Re: [BUGS] BUG #4047: case preserve for columns
If the standard causes headaches, ignore the standard. You already did so with UPPERCASE: the standard folds to upper case, yet now you fold everything to lower case. That keeps things simpler, but why ignore the thousands of people asking for case preservation?

create table test ( "Id" integer );
SELECT "Id" AS "ID" FROM test

This causes BIG headaches when moving to PG from other databases.

First, it forces me and others to write

SELECT "Id" AS "ID" FROM test

(typing 8 + length(FieldName) useless characters, multiplied by the field count) instead of

SELECT ID FROM test

Second, it forces us to write

SELECT id AS "ID", name AS "Name", phone AS "Phone", adreass AS "Address" FROM test

(typing a ton of useless characters) instead of

SELECT * FROM test

This query rewriting is required because hashes in programs are case sensitive, and '$row->{Name}' is not the same as '$row->{name}'. In my case MySQL returns 'ID', 'Name', 'Phone' when I execute a 'SELECT * FROM test' query, so I have to rewrite it as

SELECT id AS "ID", name AS "Name", phone AS "Phone", adreass AS "Address" FROM test

instead of rewriting each field reference from '$row->{Name}' to '$row->{name}'. So I end up expanding all of my 'SELECT * FROM' queries.

Third headache: now you may say, "STOP! Quote the field names when creating the tables." But that will not end the headache, because over the years people have written SELECT ID FROM, or SELECT id FROM, or SELECT Id FROM, or maybe even SELECT iD FROM. So I would have to inspect every line of the source files and turn them into SELECT "ID" FROM / SELECT "ID" AS "id" FROM / SELECT "ID" AS "Id" FROM / SELECT "ID" AS "iD" FROM, instead of just rewriting each hash reference from '$row->{Name}' to '$row->{name}', etc. All these scripts, written over years, give me, and NOT ONLY ME!!, a big headache... :`-(

How much time do you think this will take me while moving from some other DB to PG? A week? A month? And how much time would it take to set a server variable 'preserveCase = on'? A minute or two?
I googled the web and saw answers like "case preserving will crash apps that have been written for years", but my answer is "NO!", because 'preserveCase' would be OFF by default, so old apps would not crash: they would get their lowercased field names just as they get them now. BUT 'preserveCase = ON' would make my app, AND NOT ONLY MY APP!!!, happy to get field names the way it is used to getting them.

Developers, I know you do good work and do the best you can. You think about compatibility, and that is good; I go along with you. But can you go along with me and with the others who are moving from some other DB to PG, and let them switch ON a preserveCase option and escape the headaches described above? Developers, let's take a step forward, as you did when changing UPPERCASE folding to lowercase. I hope you will understand those of us moving to PG, and I hope you will change your mind about casePreserve. Thanks.

PS: Anyone who has had, or has, this headache, or who thinks a 'casePreserve' DB configuration option would be useful or handy: VOTE for it.

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "John R Pierce" <[EMAIL PROTECTED]>;
Sent: Friday, March 21, 2008 6:43 PM
Subject: Re: [BUGS] BUG #4047: case preserve for columns

<[EMAIL PROTECTED]> writes:
> It does not matter to me whether it is upshifted or lowershifted on the server side; the standard does not specify that the output of queries MUST be lowershifted/upshifted.

Yes it does. I quote SQL92 section 5.2 syntax rule 10:

    The <identifier body> of a <regular identifier> is equivalent to an
    <identifier body> in which every letter that is a lower-case letter
    is replaced by the equivalent upper-case letter or letters. This
    treatment includes determination of equivalence, representation in
    the Information and Definition Schemas, representation in the
    diagnostics area, and similar uses.

In particular, "representation in the diagnostics area" would include the case of column headings being returned to the client.
If you don't want case folding to happen, you need to use a quoted identifier. In the example you showed, SELECT "Id" AS ID, ... it would have been sufficient to leave off the AS clause. regards, tom lane -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
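The underlying pain point in this thread — application-side hashes are case sensitive while unquoted SQL identifiers are not — can be shown in a few lines of Python. This is only an illustration: the two rows below are mocked-up dicts standing in for driver output, not real database results.

```python
# Sketch: application code written against a DB that preserves column case
# breaks when another DB's driver hands back the case-folded names.
row_from_mysql = {"ID": 1, "Name": "Alice"}   # column case preserved
row_from_pg = {"id": 1, "name": "Alice"}      # unquoted identifiers folded to lower case

def display(row):
    # Dict (hash) keys are case sensitive, so this lookup only
    # works against one of the two result shapes.
    return row.get("Name")

print(display(row_from_mysql))  # Alice
print(display(row_from_pg))     # None -- the key is 'name', not 'Name'
```

This is exactly why the original poster ends up writing `SELECT id AS "ID", ...` everywhere: the alias, being quoted, is not folded, so the hash keys come back in the case the application expects.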
[BUGS] BUG: PG do not use index
Why does PG not use the index?

select max(id) from akh_testing_result

In MySQL this query takes 2-3 ms, but Postgres takes 132,000 ms to execute it.

select max(id) from akh_testing_result
Query executed in 2.12 minutes, 1 Record(s) Returned

   max
---------
 8757173

As we can see, the table has about 9,000,000 records.

EXPLAIN select max(id) from akh_testing_result
"Aggregate (cost=204986.95..204986.95 rows=1 width=4)"
"  ->  Seq Scan on akh_testing_result (cost=0.00..183568.56 rows=8567356 width=4)"

Note that the 'id' field is the primary key.

-- DROP TABLE public.akh_testing_result;
CREATE TABLE public.akh_testing_result
(
  id serial NOT NULL,
  testing_conf_id integer NOT NULL,
  name varchar(64) NOT NULL,
  test_group_id integer NOT NULL,
  test_status_id integer NOT NULL,
  comment text,
  bug_commited boolean,
  best_testing_conf_id integer,
  best_testing_conf_name varchar(255),
  test_time integer,
  physical_memory_peak integer,
  virtual_memory_peak integer,
  test_id integer,
  CONSTRAINT "akh_testing_result_pkey" PRIMARY KEY (id),
  CONSTRAINT "akh_testing_result_testing_conf_fkey" FOREIGN KEY (testing_conf_id) REFERENCES akh_testing_conf(id) ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT "akh_testing_result_test_group_fkey" FOREIGN KEY (test_group_id) REFERENCES akh_test_group(id) ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT "akh_testing_result_test_status_fkey" FOREIGN KEY (test_status_id) REFERENCES akh_properties(id) ON UPDATE RESTRICT ON DELETE RESTRICT,
  CONSTRAINT "akh_testing_result_test_fkey" FOREIGN KEY (test_id) REFERENCES akh_test(id) ON UPDATE RESTRICT ON DELETE RESTRICT
);
-- Indexes
CREATE INDEX akh_testing_result_testing_conf ON akh_testing_result USING btree (testing_conf_id);
-- Owner
ALTER TABLE public.akh_testing_result OWNER TO postgres;
[BUGS] can't delete record from second table in rules of view with join select
Hello, all

From the sql.ru forum: it does not matter whether it is one rule, like:

create rule v_del as on delete to v do instead (
  delete from o1 where id = old.o1_id;
  delete from o2 where id = old.o2_id;
);

or split into two rules, like:

create rule v_del1 as on delete to v do instead (
  delete from o1 where id = old.o1_id;
);
create rule v_del2 as on delete to v do instead (
  delete from o2 where id = old.o2_id;
);

the delete from the second table (o2) does not delete anything.

Test case:

begin;
select version();
create table o1 (id int, val text);
create table o2 (id int, val text);
create view v as
  select o1.id as o1_id, o1.val as o1_val, o2.id as o2_id, o2.val as o2_val
  from o1, o2 where o1.id = o2.id;
create rule v_del as on delete to v do instead (
  delete from o1 where id = old.o1_id;
  delete from o2 where id = old.o2_id;
);
-- create rule v_del1 as on delete to v do instead (
--   delete from o1 where id = old.o1_id;
-- );
-- create rule v_del2 as on delete to v do instead (
--   delete from o2 where id = old.o2_id;
-- );
insert into o1 values (1, 'o1 1'), (2, 'o1 2'), (3, 'o1 3');
insert into o2 values (1, 'o2 1'), (2, 'o2 2'), (3, 'o2 3');
select * from v;
delete from v where o1_id = 1;
explain analyze delete from v where o2_id = 2;
select * from v;
select * from o1;
select * from o2;
rollback;

=== output ===

version
------------------------------------------------------------------------------------
PostgreSQL 8.3.0 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3 (Debian 4.2.3-1)

---
select * from v;
 o1_id | o1_val | o2_id | o2_val
-------+--------+-------+--------
     1 | o1 1   |     1 | o2 1
     2 | o1 2   |     2 | o2 2
     3 | o1 3   |     3 | o2 3
(3 rows)

---
delete from v where o1_id = 1;
seb=> DELETE 0

---
explain analyze delete from v where o2_id = 2;
QUERY PLAN
------------------------------------------------------------------------------------
Nested Loop (cost=50.76..81.18 rows=216 width=6) (actual time=0.040..0.050 rows=1 loops=1)
  -> Nested Loop (cost=25.38..51.48 rows=36 width=14) (actual time=0.030..0.034 rows=1 loops=1)
    -> Seq Scan on o1 (cost=0.00..25.38 rows=6 width=10) (actual time=0.014..0.015 rows=1 loops=1)
       Filter: (id = 2)
    -> Materialize (cost=25.38..25.44 rows=6 width=4) (actual
time=0.012..0.014 rows=1 loops=1)
      -> Seq Scan on o2 (cost=0.00..25.38 rows=6 width=4) (actual time=0.007..0.008 rows=1 loops=1)
         Filter: (id = 2)
  -> Materialize (cost=25.38..25.44 rows=6 width=4) (actual time=0.007..0.010 rows=1 loops=1)
    -> Seq Scan on o1 (cost=0.00..25.38 rows=6 width=4) (actual time=0.005..0.008 rows=1 loops=1)
       Filter: (id = 2)
Total runtime: 0.135 ms

Nested Loop (cost=50.76..81.18 rows=216 width=6) (actual time=0.034..0.034 rows=0 loops=1)
  -> Nested Loop (cost=25.38..51.48 rows=36 width=10) (actual time=0.019..0.023 rows=1 loops=1)
    -> Seq Scan on o2 (cost=0.00..25.38 rows=6 width=10) (actual time=0.008..0.009 rows=1 loops=1)
       Filter: (id = 2)
    -> Materialize (cost=25.38..25.44 rows=6 width=4) (actual time=0.009..0.011 rows=1 loops=1)
      -> Seq Scan on o2 (cost=0.00..25.38 rows=6 width=4) (actual time=0.006..0.007 rows=1 loops=1)
         Filter: (id = 2)
  -> Materialize (cost=25.38..25.44 rows=6 width=4) (actual time=0.008..0.008 rows=0 loops=1)
    -> Seq Scan on o1 (cost=0.00..25.38 rows=6 width=4) (actual time=0.007..0.007 rows=0 loops=1)
       Filter: (id = 2)
Total runtime: 0.083 ms
(23 rows)

---
select * from v;
 o1_id | o1_val | o2_id | o2_val
-------+--------+-------+--------
     3 | o1 3   |     3 | o2 3
(1 row)

---
select * from o1;  (all correctly deleted)
 id | val
----+------
  3 | o1 3
(1 row)

---
select * from o2;  (none deleted)
 id | val
----+------
  1 | o2 1
  2 | o2 2
  3 | o2 3
(3 rows)

seb=> ROLLBACK
Re: [BUGS] BUG: PG do not use index
[EMAIL PROTECTED] wrote:
> Why PG do not use index?
> select max(id) from akh_testing_result

What PG version are you using? Recent versions should indeed use the index. Perhaps you just need to upgrade.

-- 
Alvaro Herrera                        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Re: [BUGS] can't delete record from second table in rules of view with join select
Hello, all

> not important, is it one rule like:
> create rule v_del as on delete to v do instead (
>   delete from o1 where id = old.o1_id;
>   delete from o2 where id = old.o2_id;
> );
>
> or split into two rule like:
> create rule v_del1 as on delete to v do instead (
>   delete from o1 where id = old.o1_id;
> );
> create rule v_del2 as on delete to v do instead (
>   delete from o2 where id = old.o2_id;
> );

Sorry, after thinking about this problem for some time, I now understand what is going on there... %) When the first rule is executed, there is no OLD row in the "v" view any more; nothing is joined, so in the second rule the OLD value is empty. So my question changes to: is this expected behavior or a bug? %)
Re: [BUGS] why provide cross type arithmetic operators
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Added to TODO:
> > * Add more cross-data-type operators
> >   http://archives.postgresql.org/pgsql-bugs/2008-01/msg00189.php
>
> Uh ... that is exactly 180 degrees away from the point of the thread.

OK, I see now, updated:

* Simplify integer cross-data-type operators

  Email text is: http://archives.postgresql.org/pgsql-bugs/2008-01/msg00199.php

  So it seems that the problem with cross-type operators is not so much having them as having incomplete sets of them. We could fix this case either by adding int2 + int8 or by removing int4 + int8, and simplicity would seem to argue for the latter.

-- 
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB                       http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [BUGS] BUG #4055: Using Like in PostgreSQL 8.2.7 and 8.3.1
In response to "Hugo Rafael Lesme Marquez" <[EMAIL PROTECTED]>:
>
> The following bug has been logged online:
>
> Bug reference: 4055
> Logged by: Hugo Rafael Lesme Marquez
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.2.7 and 8.3.1
> Operating system: Windows 2003 Server SP2
> Description: Using Like in PostgreSQL 8.2.7 and 8.3.1
> Details:
>
> Using PostgreSQL Database 8.2.7 and 8.3.1
>
> With PostgreSQL 8.2.7
>
> select * from movimientos where idmovimientos like '178'
>
> returns
>
> 178; "2008-02-16"; 4700.00; 2; 1; "2008-02-22 16:39:09.14"; 2; "maricel"; 2
>
> Ok!!!
>
> With PostgreSQL 8.3.1
>
> select * from movimientos where idmovimientos like '178'
>
> returns
>
> ERROR: operator does not exist: bigint ~~ unknown
> LINE 1: select * from movimientos where idmovimientos like '178'
>                                                            ^
> HINT: No operator matches the given name and argument type(s). You might
> need to add explicit type casts.
>
> ** Error **
>
> ERROR: operator does not exist: bigint ~~ unknown
> SQL state: 42883
> Dica: No operator matches the given name and argument type(s). You might
> need to add explicit type casts.
> Caracter: 47
>
> What can I do to resolve this?

8.3 is fussier about type matching than 8.2 was. The query above was casting idmovimientos to TEXT to do the comparison (since LIKE is pointless on a BIGINT value); 8.3 doesn't do this automatically. If you have a legitimate reason to do the above comparison, cast that column to TEXT. (Although I agree with Alvaro that the query, as provided, is pretty pointless.) You could also adjust the pg_cast table to make the cast implicit, which would mimic the 8.2 behaviour.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
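In Postgres, the explicit cast Bill describes would be spelled `idmovimientos::text LIKE '178'` or, in standard syntax, `CAST(idmovimientos AS TEXT) LIKE '178'`. A runnable sketch of the same idea (SQLite is used here only as a stand-in server, since it accepts the standard CAST form; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movimientos (idmovimientos INTEGER)")
conn.execute("INSERT INTO movimientos VALUES (178), (1780)")

# Casting the numeric column to text makes LIKE well-defined --
# which is exactly what 8.3 now forces you to spell out explicitly.
rows = conn.execute(
    "SELECT idmovimientos FROM movimientos "
    "WHERE CAST(idmovimientos AS TEXT) LIKE '178'"
).fetchall()
print(rows)  # [(178,)] -- '1780' does not match the pattern '178'
```

Note that LIKE without wildcards behaves like equality on the text form, which is why `= 178` is usually the better query here.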
Re: [BUGS] BUG: PG do not use index
In response to [EMAIL PROTECTED]: > Why PG do not use index? The standard question: when was the last time you did a vacuum analyze on this table? > > select max(id) from akh_testing_result > For MySQL this query take 2-3ms, but Postgres take 132 000ms to execute query. > > select max(id) from akh_testing_result > Query executed in 2,12 minutes, 1 Record(s) Returned > > max > --- > 8757173 > > As we see table has about 9 000 000 records > > EXPLAIN select max(id) from akh_testing_result > "Aggregate (cost=204986.95..204986.95 rows=1 width=4)" > " -> Seq Scan on akh_testing_result (cost=0.00..183568.56 rows=8567356 > width=4)" > > Notice that 'id' field is primary index > > -- DROP TABLE public.akh_testing_result; > CREATE TABLE public.akh_testing_result > ( > id serial NOT NULL, > testing_conf_id integer NOT NULL, > name varchar(64) NOT NULL, > test_group_id integer NOT NULL, > test_status_id integer NOT NULL, > comment text, > bug_commited boolean, > best_testing_conf_id integer, > best_testing_conf_name varchar(255), > test_time integer, > physical_memory_peak integer, > virtual_memory_peak integer, > test_id integer, > CONSTRAINT "akh_testing_result_pkey" PRIMARY KEY (id), > CONSTRAINT "akh_testing_result_testing_conf_fkey" FOREIGN KEY > (testing_conf_id) REFERENCES akh_testing_conf(id) ON UPDATE RESTRICT ON > DELETE RESTRICT, > CONSTRAINT "akh_testing_result_test_group_fkey" FOREIGN KEY (test_group_id) > REFERENCES akh_test_group(id) ON UPDATE RESTRICT ON DELETE RESTRICT, > CONSTRAINT "akh_testing_result_test_status_fkey" FOREIGN KEY (test_status_id) > REFERENCES akh_properties(id) ON UPDATE RESTRICT ON DELETE RESTRICT, > CONSTRAINT "akh_testing_result_test_fkey" FOREIGN KEY (test_id) REFERENCES > akh_test(id) ON UPDATE RESTRICT ON DELETE RESTRICT > ); > -- Indexes > CREATE INDEX akh_testing_result_testing_conf ON akh_testing_result USING > btree (testing_conf_id); > -- Owner > ALTER TABLE public.akh_testing_result OWNER TO postgres; > > > > > > -- Bill Moran 
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
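Context for this thread: before PostgreSQL 8.1, `MAX()` could not use a btree index, and the classic workaround was to rewrite the aggregate as an `ORDER BY ... LIMIT 1`, which can walk the index backwards. A sketch of the equivalence of the two forms (SQLite stands in for the server here; the table name is taken from the report, the values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE akh_testing_result (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO akh_testing_result VALUES (?)",
                 [(i,) for i in (1, 42, 8757173)])

# The aggregate form from the report, which old PG executed as a seq scan...
agg = conn.execute("SELECT max(id) FROM akh_testing_result").fetchone()[0]
# ...and the index-friendly rewrite recommended for pre-8.1 servers.
lim = conn.execute(
    "SELECT id FROM akh_testing_result ORDER BY id DESC LIMIT 1"
).fetchone()[0]
print(agg, lim)  # 8757173 8757173
```

On 8.1 and later the planner performs this rewrite itself, which is why Alvaro suggests simply upgrading.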
Re: [BUGS] can't delete record from second table in rules of view with join select
Sergey Burladyan <[EMAIL PROTECTED]> writes:
> Sorry, after thinking some time about this problem now i may be understand
> what going on there... %)
> When first rule was exec - no OLD row anymore in "v" view, nothing will be
> joined and in second rule OLD value is empty... so my question is changed to:
> is this expected behavior or a bug ? %)

Yup, it's expected.

regards, tom lane
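The mechanism Sergey describes can be mimicked without the rule system at all: each rule action is a separate query over the re-evaluated view join, and once the o1 rows are gone, the join produces no OLD rows for the second action. Below is a hand-expanded sketch of the two rule actions as plain SQL (SQLite as a stand-in; the expansion is my own approximation of what the rewriter does, not its literal output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE o1 (id INT, val TEXT);
CREATE TABLE o2 (id INT, val TEXT);
INSERT INTO o1 VALUES (1, 'o1 1'), (2, 'o1 2');
INSERT INTO o2 VALUES (1, 'o2 1'), (2, 'o2 2');
""")

# First rule action for "DELETE FROM v WHERE o1_id = 1":
# the view join still finds the (1, 1) pair, so the o1 row is deleted.
conn.execute("""DELETE FROM o1 WHERE id IN
    (SELECT o1.id FROM o1 JOIN o2 ON o1.id = o2.id WHERE o1.id = 1)""")
# Second rule action: by now o1 has no id=1 row, so the join is
# empty and nothing in o2 qualifies -- exactly the reported behavior.
conn.execute("""DELETE FROM o2 WHERE id IN
    (SELECT o2.id FROM o1 JOIN o2 ON o1.id = o2.id WHERE o1.id = 1)""")

print(conn.execute("SELECT count(*) FROM o1").fetchone()[0])  # 1
print(conn.execute("SELECT count(*) FROM o2").fetchone()[0])  # 2 -- untouched
```

This is why rules behave like macro expansion rather than row triggers: each expanded command sees the effects of the previous ones.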
[BUGS] BUG #4058: xml_table() segfaults on null
The following bug has been logged online:

Bug reference: 4058
Logged by: Frank F.
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2.5-2PGDG
Operating system: Centos
Description: xml_table() segfaults on null
Details:

The xml_table() function in the xml2 contrib module causes a segfault in postgres 8.2.5 if it encounters a null value in the column that it's trying to pull XML data from.
[BUGS] Possible problem with EXTRACT(EPOCH FROM TIMESTAMP)
Hi,

I came across something weird that I personally can't explain regarding the EXTRACT function. I've created a few SQLs to let people see what is happening. Perhaps there is an explanation for it.

Here is my script to test:

-- The first column of each of the following 2 queries is trying to
-- calculate the number of days since Jan 1st 1970. The date
-- 2007-04-09 seems to be special because the date seems to change
-- at 1am rather than at midnight as I would expect it to.
-- SHOW ALL shows my TimeZone is set to "Europe/London"

SELECT FLOOR(EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT TIME ZONE) / 86400.0),
       EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT TIME ZONE);

SELECT FLOOR(EXTRACT(EPOCH FROM '2007-04-09 01:00:00'::TIMESTAMP WITHOUT TIME ZONE) / 86400.0),
       EXTRACT(EPOCH FROM '2007-04-09 01:00:00'::TIMESTAMP WITHOUT TIME ZONE);

-- The following query converts '2007-04-09 00:59:59' into seconds since EPOCH, then back to a timestamp.
-- The timestamp loses 1 hour in the conversion.
SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT TIME ZONE) * INTERVAL '1 second';
-- For me this query returns '2007-04-08 23:59:59' where the input is '2007-04-09 00:59:59' (one hour earlier).
-- Is this down to daylight saving? Daylight saving changes at the end of March in my time zone.

test=# SELECT VERSION();
version
----------------------------------------------------
PostgreSQL 8.3.0, compiled by Visual C++ build 1400

If anyone is able to give me some information about this it would be most helpful.

David.
Re: [BUGS] Possible problem with EXTRACT(EPOCH FROM TIMESTAMP)
On Tue, Mar 25, 2008 at 07:50:30PM -, David Rowley wrote:
> Hi,
>
> I came across something weird that I personally can't explain regarding the
> EXTRACT function.
> I've created a few SQLs to let people see what is happening. Perhaps there
> is an explaination for it.
>
> Here is my script to test:
>
> -- The first column of the following 2 queries is trying to
> -- calculate the number of days since Jan 1st 1970. The date
> -- 2007-04-09 seems to be special because the date seems to change
> -- at 1am rather than at mid night as I would expect it to.
> -- SHOW ALL shows my TimeZone is set to "Europe/London"
>
> SELECT FLOOR(EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT
> TIME ZONE) / 86400.0),EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP
> WITHOUT TIME ZONE);

This is a bug. Extract(epoch from [timestamp without time zone]) shouldn't work at all. Epoch only has meaning in the context of a timestamptz.

Cheers,
David.
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [BUGS] Possible problem with EXTRACT(EPOCH FROM TIMESTAMP)
David Fetter <[EMAIL PROTECTED]> writes:
> On Tue, Mar 25, 2008 at 07:50:30PM -, David Rowley wrote:
>> SELECT FLOOR(EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT
>> TIME ZONE) / 86400.0),EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP
>> WITHOUT TIME ZONE);

> This is a bug. Extract(epoch from [timestamp without time zone])
> shouldn't work at all. Epoch only has meaning in the context of a
> timestamptz.

One man's bug is another man's feature ;-). The EPOCH code is designed to produce the same result as if you had casted the timestamp to timestamp with time zone --- the important point there being that the stamp will be interpreted as being in your local time zone (per the TimeZone parameter).

So the problem with the OP's example is that he's doing

SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT TIME ZONE) * INTERVAL '1 second';

As mentioned in the docs, you really need to add the epoch offset to TIMESTAMP WITH TIME ZONE 'epoch' if you want to arrive at a sane result. That would produce a globally correct timestamp-with-TZ result, which you could cast back to timestamp without TZ if you had a mind to.

We used to interpret EPOCH of a timestamp without TZ as if the timestamp were in GMT, which would be a behavior that would produce the results the OP is expecting. That was changed intentionally sometime between 7.2 and 7.3; I forget the reasoning but I'm sure we had a good reason.

Probably the easiest way to get the desired result is to use AT TIME ZONE, ie do the extract this way:

EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP AT TIME ZONE 'GMT')

Of course this all begs the question of why the OP *isn't* using timestamp with time zone, or at least setting his zone to GMT if he doesn't want DST-aware calculations.

regards, tom lane
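Tom's point — that the naive timestamp is interpreted in the session's TimeZone — can be reproduced outside the database with Python's zoneinfo, assuming the system's tz database knows Europe/London (where BST, UTC+1, was in effect on 2007-04-09):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

naive = datetime(2007, 4, 9, 0, 59, 59)

# Interpreted in Europe/London -- what EXTRACT(EPOCH FROM ...) does
# when TimeZone = 'Europe/London':
as_london = naive.replace(tzinfo=ZoneInfo("Europe/London")).timestamp()
# Interpreted as GMT -- the AT TIME ZONE 'GMT' workaround:
as_gmt = naive.replace(tzinfo=timezone.utc).timestamp()

print(as_gmt - as_london)  # 3600.0 -- the "lost" hour during BST
# The floor-division day counter flips one hour late under the London
# interpretation, which is exactly the 1am boundary the report observed:
print(int(as_london) // 86400, int(as_gmt) // 86400)
```

The London interpretation yields an epoch one hour (3600 s) smaller, so dividing by 86400 still lands on April 8 at 00:59:59 local time, and the day number only advances at 01:00.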
[BUGS] BUG #4057: SUM returns NULL when given no rows
The following bug has been logged online:

Bug reference: 4057
Logged by: Reginald Drake
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2.7
Operating system: Ubuntu linux
Description: SUM returns NULL when given no rows
Details:

Doing something like "SELECT SUM(some_integer_column) FROM some_table WHERE FALSE" gives me NULL, where I would expect 0. Since COUNT does give a meaningful value when applied to zero rows, maybe SUM should do the same.
Re: [BUGS] BUG #4057: SUM returns NULL when given no rows
"Reginald Drake" <[EMAIL PROTECTED]> writes:
> Doing something like "SELECT SUM(some_integer_column) FROM some_table WHERE
> FALSE" gives me NULL, where I would expect 0. Since COUNT does give a
> meaningful value when applied to zero columns, maybe SUM should do the same.

You might expect that, but the SQL spec is entirely clear on the matter:

    Case:
    a) If COUNT is specified, then the result is the cardinality of TXA.
    b) If AVG, MAX, MIN, or SUM is specified, then
       Case:
       i) If TXA is empty, then the result is the null value.
       ii) If AVG is specified, then the result is the average of the
           values in TXA.
       iii) If MAX or MIN is specified, then the result is respectively
           the maximum or minimum value in TXA. These results are
           determined using the comparison rules specified in
           Subclause 8.2, "<comparison predicate>".
       iv) If SUM is specified, then the result is the sum of the
           values in TXA. If the sum is not within the range of the
           data type of the result, then an exception condition is
           raised: data exception-numeric value out of range.

regards, tom lane
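The spec behavior Tom quotes is easy to observe, and wrapping the aggregate in COALESCE is the usual way to get a zero instead. A sketch (SQLite as a stand-in; it follows the same rule for SUM over an empty input):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_integer_column INTEGER)")

# SUM over an empty input is the null value, per SQL92 6.5...
raw = conn.execute(
    "SELECT SUM(some_integer_column) FROM some_table WHERE 1=0"
).fetchone()[0]
# ...so wrap it in COALESCE if you want 0 instead.
safe = conn.execute(
    "SELECT COALESCE(SUM(some_integer_column), 0) FROM some_table WHERE 1=0"
).fetchone()[0]
print(raw, safe)  # None 0
```

COALESCE works the same way in PostgreSQL, so `COALESCE(SUM(col), 0)` gives the behavior the report asks for without changing the aggregate itself.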
Re: [BUGS] Possible problem with EXTRACT(EPOCH FROM TIMESTAMP)
Thanks for the feedback. I would have been more suspicious of time zones if the unusual date had fallen on the 25th of March that year; that is when daylight saving time starts in the time zone that PostgreSQL uses on my system. The AT TIME ZONE approach sounds like the best fix for my problem.

Thank you both.

David.

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "David Fetter" <[EMAIL PROTECTED]>
Cc: "David Rowley" <[EMAIL PROTECTED]>;
Sent: Tuesday, March 25, 2008 11:34 PM
Subject: Re: [BUGS] Possible problem with EXTRACT(EPOCH FROM TIMESTAMP)

David Fetter <[EMAIL PROTECTED]> writes:
On Tue, Mar 25, 2008 at 07:50:30PM -, David Rowley wrote:
SELECT FLOOR(EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT
TIME ZONE) / 86400.0),EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP
WITHOUT TIME ZONE);

This is a bug. Extract(epoch from [timestamp without time zone]) shouldn't work at all. Epoch only has meaning in the context of a timestamptz.

One man's bug is another man's feature ;-). The EPOCH code is designed to produce the same result as if you had casted the timestamp to timestamp with timezone --- the important point there being that the stamp will be interpreted as being in your local time zone (per the TimeZone parameter). So the problem with the OP's example is that he's doing

SELECT TIMESTAMP WITHOUT TIME ZONE 'epoch' + EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP WITHOUT TIME ZONE) * INTERVAL '1 second';

As mentioned in the docs, you really need to add the epoch offset to TIMESTAMP WITH TIME ZONE 'epoch' if you want to arrive at a sane result. That would produce a globally correct timestamp-with-TZ result, which you could cast back to timestamp without TZ if you had a mind to. We used to interpret EPOCH of a timestamp without TZ as if the timestamp were in GMT, which would be a behavior that would produce the results the OP is expecting.
That was changed intentionally sometime between 7.2 and 7.3; I forget the reasoning but I'm sure we had a good reason.

Probably the easiest way to get the desired result is to use AT TIME ZONE, ie do the extract this way:

EXTRACT(EPOCH FROM '2007-04-09 00:59:59'::TIMESTAMP AT TIME ZONE 'GMT')

Of course this all begs the question of why the OP *isn't* using timestamp with time zone, or at least setting his zone to GMT if he doesn't want DST-aware calculations.

regards, tom lane
[BUGS] BUG #4059: Vacuum full not always cleaning empty tables
The following bug has been logged online:

Bug reference: 4059
Logged by: Mark Steben
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2.5
Operating system: Linux redhat
Description: Vacuum full not always cleaning empty tables
Details:

I sent an email to 'pgsql-bugs@postgresql.org' entitled 'Possible bug with VACUUM FULL', and also an addendum, but neglected to also send this bug report form. You can refer to those emails; in a nutshell, I have found that VACUUM FULL will not clean dead tuples out of an empty table when other queries are accessing the database. Test plans/scenarios are included.
[BUGS] BUG #4060: libpq - large stack causes TCP/IP error
The following bug has been logged online:

Bug reference: 4060
Logged by: Bruce Nairn
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.3.1
Operating system: Windows XP Pro
Description: libpq - large stack causes TCP/IP error
Details:

I'm using the libpq interface on Win XP (x86) to connect to a postgreSQL database, compiling with Microsoft Visual Studio 2005 (express edition). When the program stack size increases past a certain size, PQsetdbLogin returns a TCP/IP buffer error: could not connect to server: No buffer space available (0x2747/10055). A sample code that reproduces the error is attached below. Any suggestions or fixes would be appreciated.

Thanks,
Bruce

#include <iostream>
#include <string>
#include <cstdlib>
#include "libpq-fe.h"

/* -----------------------------------------------------------------
 * This is a test program to demonstrate a problem connecting to a PostgreSQL
 * database when the stack is set to exceed approx 0x32cd
 * (decimal: 852295680); seems to depend slightly on shell environment.
 *
 * Compile on Windows XP with MS Visual Studio 2005:
 *   > cl /EHsc /MD /F0x3300 testconn4.cpp
 *        /Ic:\postgresql\postgresql-8.3.1\src\interfaces\libpq
 *        /Ic:\postgresql\postgresql-8.3.1\src\include
 *        /Ic:\postgresql\postgresql-8.3.1\src
 *        /link c:\postgresql\postgresql-8.3.1\src\interfaces\libpq\release\libpqdll.lib
 *
 * gives:
 *
 *   >testconn4 guest 103
 *   start
 *   inputs first
 *   PQsetdbLogin
 *   Connection to database failed Error returned: could not connect to
 *   server: No buffer space available (0x2747/10055)
 *   Is the server running on host "store.cincomsmalltalk.com" and accepting
 *   TCP/IP connections on port 5432?
 *
 * Compile with /F0x3200 (or smaller, or omit), gives the expected output:
 *
 *   >testconn4 guest 103
 *   start
 *   inputs first
 *   PQsetdbLogin
 *   success!
 *
 * Is there a solution to allow this to work with a program that requires
 * a stack this large?
 */

using namespace std;

extern "C" void LoadWaterBody(const char* user, const char* setnum) {
    std::string newuser = user;
    std::string newsetnum = setnum;
    PGconn* conn;
    PGresult* pgrs;
    char* pghost;
    char* pgport;
    char* pgoptions;
    char* pgtty;
    char* dbName;
    char* login;
    char* pwd;

    /*
     * Google found this database available for public connection, so I use it
     * for demonstration purposes... Thanks!
     * http://www.cincomsmalltalk.com/CincomSmalltalkWiki/PostgreSQL+Access+Page#guestAccess
     */
    pghost = "store.cincomsmalltalk.com";
    pgport = "5432";
    pgoptions = NULL;
    pgtty = NULL;
    dbName = "store_public";
    login = "guest";
    pwd = "guest";

    cout << "inputs first\n";
    conn = PQsetdbLogin(pghost, pgport, pgoptions, pgtty, dbName, login, pwd);
    cout << "PQsetdbLogin\n";

    if (PQstatus(conn) == CONNECTION_BAD) {   // did the database connection fail?
        cerr << "Connection to database failed "
             << "Error returned: " << PQerrorMessage(conn) << endl;
        exit(1);
    }

    // normally the db query would go here...
    cout << "success!";
}

int main(int argc, const char* argv[]) {
    cout << "start\n";
    LoadWaterBody(argv[1], argv[2]);
}
Re: [BUGS] BUG #4058: xml_table() segfaults on null
"Frank F." <[EMAIL PROTECTED]> writes:
> The xml_table() function in the xml2 contrib module causes a segfault in
> postgres 8.2.5 if it encounters a null value in the column that it's trying
> to pull XML data from.

Seems to have been broken since day one :-(. Will fix.

regards, tom lane
Re: [BUGS] BUG #4059: Vacuum full not always cleaning empty tables
Mark Steben wrote:
> I sent an email to 'pgsql-bugs@postgresql.org' entitled
> 'Possible bug with VACUUM FULL' and also an addendum
> But neglected to also send this bug report form.
> You can refer to those emails; in a nutshell, I have found that VACUUM FULL
> will not clean dead tuples out of an empty table when other queries are
> accessing the database. Test plans/scenarios are included.

Yes, if other sessions have open transactions whose start times predate the start time of the transaction that deleted the tuples, those tuples cannot be removed by VACUUM. This is known and expected. Is this what's happening here?

-- 
Alvaro Herrera                        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.