[BUGS] plperl & sort

2008-11-04 Thread Jeff

I've run into an interesting problem.
It seems that while you can call sort() in a trusted plperl function,
you cannot access $a & $b, which effectively makes it useless.


I've tested this on 8.2.11, 8.3.5, and the Nov 4 snapshot from ftp.postgresql.org.
In all cases this was on a Mac with Perl 5.8.8.

I also tested on Linux with 8.2.5 (yes, yes, I know I need to upgrade!)
with the same results.


Is this intended behavior?


create or replace function trustedsort()
returns int
as $$

my @arr = (5, 4, 3, 2, 1);

my @sorted = sort { elog(NOTICE, "$a $b"); $a <=> $b } @arr;

return 1;

$$
language 'plperl';

create or replace function untrustedsort()
returns int
as $$

my @arr = (5, 4, 3, 2, 1);

my @sorted = sort { elog(NOTICE, "$a $b"); $a <=> $b } @arr;

return 1;

$$
language 'plperlu';


select trustedsort();
select untrustedsort();

drop function trustedsort();
drop function untrustedsort();



CREATE FUNCTION
CREATE FUNCTION
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
psql:stupid_plperl.sql:28: NOTICE:
 trustedsort 
-------------
           1
(1 row)

psql:stupid_plperl.sql:29: NOTICE:  5 4
psql:stupid_plperl.sql:29: NOTICE:  3 2
psql:stupid_plperl.sql:29: NOTICE:  4 2
psql:stupid_plperl.sql:29: NOTICE:  4 3
psql:stupid_plperl.sql:29: NOTICE:  2 1
 untrustedsort 
---------------
             1
(1 row)

DROP FUNCTION
DROP FUNCTION

--
Jeff Trout <[EMAIL PROTECTED]>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/




--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] plperl & sort

2008-11-04 Thread Jeff


On Nov 4, 2008, at 2:27 PM, Alex Hunsaker wrote:


On Tue, Nov 4, 2008 at 09:02, Jeff <[EMAIL PROTECTED]> wrote:

I've run into this interesting problem.
It seems that while you can call sort() in a trusted plperl func you
cannot access $a & $b which effectively makes it useless.


Hrm, it works for me if I take the elog out of sort()




I came across this because I was attempting to sort some data (an
array of hashrefs) into reverse order and got very odd results.  Some
elogging showed $a and $b were not what they should have been, and
after more and more digging I was able to whittle it down to the simple
case I posted.  When I tried having it call a sub instead of an
anonymous block, it complained that the sub didn't exist.  (I have other
plperl functions that have subs declared and they all work fine, but I
never used them with sort before.)


I'll have some more time to tinker with it tomorrow.  I'm reasonably
sure it's got something to do with the Safe module and some magic-fu we
may need.  Looking at plperl, we do allow sort, so I'm not sure why $a &
$b disappear.


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/






Re: [BUGS] plperl & sort

2008-11-05 Thread Jeff


On Nov 4, 2008, at 4:43 PM, Andrew Dunstan wrote:



OK, the first thing to note is that there is an easy workaround,  
which is to use a sort routine that doesn't need $a/$b. Example:


  create or replace function mysort() returns text language plperl as $f$

  my $sfunc = sub ($$) { $_[0] <=> $_[1] };

  my @vals = (5,3,4,2,7);

  return join(' ', sort $sfunc @vals);

  $f$;



Andrew for the win!

Thanks a lot!

I agree, a documentation note would be fine for this rather than doing all
sorts of complicated Perl trickery.
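For readers more at home in other languages, the trick in Andrew's workaround (a comparator that receives its operands as explicit arguments rather than through the package globals $a/$b) is the normal idiom elsewhere. A rough sketch of the same pattern in Python, for comparison only:

```python
from functools import cmp_to_key

# The comparator gets both operands explicitly, so no package globals
# are needed -- analogous to Andrew's  sub ($$) { $_[0] <=> $_[1] }.
def numeric_cmp(x, y):
    # Equivalent of Perl's <=> spaceship operator: -1, 0, or 1.
    return (x > y) - (x < y)

vals = [5, 3, 4, 2, 7]
print(' '.join(str(v) for v in sorted(vals, key=cmp_to_key(numeric_cmp))))
# -> 2 3 4 5 7
```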


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/






[BUGS] GIN overlap vs empty arrays

2010-03-25 Thread Jeff
Ran into this and I'm trying to decide if this is functioning as
designed or if it is a bug that should be fixed (PG 8.4.2):


create table gintest
(
idList int[],
foo text
);

create index gintest_gin_idx on gintest using gin(idList gin__int_ops);

insert into gintest(idlist, foo) values (array[1,2,3], 'bar');

select * from gintest where idList && array[]::int[];

CREATE TABLE
CREATE INDEX
INSERT 0 1
psql:ginproblem.sql:11: ERROR:  GIN indexes do not support whole-index scans


I came across this in a production setting and whittled it down to
this.  In a nutshell, using overlap with an empty (not null) array
causes this error.  Should there be a short circuit to bail out on
zero-length input to overlap, since you can't overlap with nothing?
(If you pass in a plain NULL it works fine.)
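Until the server-side behavior changes, one option is an application-side short circuit that skips the overlap predicate entirely for an empty id list. A minimal sketch (the helper name and the constant-FALSE fallback are assumptions for illustration, not anything in PostgreSQL or intarray; a real application would use bound parameters rather than string building):

```python
def overlap_clause(ids):
    """Build a WHERE fragment for  idList && ARRAY[...],  short-circuiting
    the empty case so the GIN index is never asked for a whole-index scan."""
    if not ids:
        return "FALSE"  # nothing can overlap an empty array
    # int() coercion keeps this illustration safe for integer ids only.
    placeholders = ','.join(str(int(i)) for i in ids)
    return "idList && ARRAY[%s]" % placeholders

print(overlap_clause([1, 2, 3]))  # idList && ARRAY[1,2,3]
print(overlap_clause([]))         # FALSE
```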


In the production setting it is tickled by the array being produced by  
a subselect that uses array_accum to gather a list of ids to pull up.


If this is the proper behavior I'll deal with it (in the end the
result is the same: no rows).  I was just a bit surprised by it.


--
Jeff Trout 
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/






Re: [BUGS] GIN overlap vs empty arrays

2010-03-25 Thread Jeff


On Mar 25, 2010, at 11:44 AM, Tom Lane wrote:

Hmm, that case is supposed to work, in 8.3 and later ... but it doesn't
because of a stupid typo in contrib/intarray.



This works.  Thanks!


--
Jeff Trout 
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/






[BUGS] to_date bug

2002-06-13 Thread Jeff


select to_date('December 12 2002','Month dd ');
  to_date   
------------
 2002-12-02

select to_date('January 12 2002','Month dd ');
  to_date   
------------
 0005-06-24 <- Problem right there.


select to_date('January 12, 2002','Month dd, ');
  to_date   
------------
 2002-01-01   <-- It should be 2002-01-12

select to_date('December 12, 2002','Month dd '); 
  to_date   
------------
 2002-12-02


For some reason it's getting caught up on January.  I know these dates are
unambiguous; however, I think there might be a bug there.
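For comparison, the equivalent parse outside the database gives the expected dates (a sketch using Python's strptime, where %B matches a full English month name, roughly corresponding to the 'Month dd YYYY' pattern; this assumes an English locale):

```python
from datetime import datetime

# 'January 12 2002' with a full-month-name pattern should yield 2002-01-12.
d = datetime.strptime('January 12 2002', '%B %d %Y').date()
print(d.isoformat())   # 2002-01-12

d2 = datetime.strptime('December 12 2002', '%B %d %Y').date()
print(d2.isoformat())  # 2002-12-12
```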

It would be my first bug ;)








---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [BUGS] BUG #5841: rank()+1 fails, 1+rank() succeeds

2011-01-18 Thread jeff
Thanks for clarifying. I've submitted a note on the interactive
version of the docs, and attached a small patch to make it explicit that
a window function call must be _immediately_ followed by an OVER clause,
as the syntax[1] indicates.


Regards,
Jeff


[1]  
http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS


Quoting Alvaro Herrera:


Excerpts from Jeff Turner's message of Mon Jan 17 07:34:29 -0300 2011:


test=# select *, rank()+1 over (partition by bar) from t;
ERROR:  syntax error at or near "over"
LINE 1: select *, rank()+1 over (partition by bar) from t;


The "over" stuff is part of the expression; you can't add the +1 in the
middle.  This works:

select *, rank() over (partition by bar) + 1 from t;

--
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support




diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml
index 784b40a..9cb7173 100644
--- a/doc/src/sgml/advanced.sgml
+++ b/doc/src/sgml/advanced.sgml
@@ -371,7 +371,7 @@ SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM emps
 

 A window function call always contains an OVER clause
-following the window function's name and argument(s).  This is what
+directly following the window function's name and argument(s).  This is what
 syntactically distinguishes it from a regular function or aggregate
 function.  The OVER clause determines exactly how the
 rows of the query are split up for processing by the window function.
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 04769f1..88345ec 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -11184,8 +11184,8 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
   
The built-in window functions are listed in
.  Note that these functions
-   must be invoked using window function syntax; that is an
-   OVER clause is required.
+   must be invoked using window function syntax; that is, they must
+   be immediately followed by an OVER clause.
   
 
   



[BUGS] BUG #6038: configure warnings on sys/socket.h and netinet/in.h, make fails

2011-05-25 Thread Jeff

The following bug has been logged online:

Bug reference:  6038
Logged by:  Jeff
Email address:  jeffadam...@gmail.com
PostgreSQL version: 9.0.4
Operating system:   Windows XP 32bit
Description:configure warnings on sys/socket.h and netinet/in.h,
make fails
Details: 

Building on MinGW/MSYS on Windows; here's the entire configure output.
Configure completes (with the warnings you see below, which it
specifically says to report as bugs) and then make fails.

I would not be surprised if I'm missing some other dependency, but it isn't
immediately clear which one...


CONFIGURE OUTPUT ***

checking build system type... i686-pc-mingw32
checking host system type... i686-pc-mingw32
checking which template to use... win32
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... gcc
checking for C compiler default output file name... a.exe
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables... .exe
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking if gcc supports -Wdeclaration-after-statement... yes
checking if gcc supports -Wendif-labels... yes
checking if gcc supports -fno-strict-aliasing... yes
checking if gcc supports -fwrapv... yes
checking whether the C compiler still works... yes
checking how to run the C preprocessor... gcc -E
checking allow thread-safe client libraries... yes
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with Kerberos 5 support... no
checking whether to build with PAM support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... no
configure: WARNING: *** Readline does not work on MinGW --- disabling
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ld used by GCC... /bin/ld
checking if the linker (/bin/ld) is GNU ld... yes
checking for ranlib... ranlib
checking for strip... strip
checking whether it is possible to strip libraries... yes
checking for ar... ar
checking for dlltool... dlltool
checking for dllwrap... dllwrap
checking for windres... windres
checking for tar... /bin/tar
checking whether ln -s works... no, using cp -p
checking for gawk... gawk
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for bison... /bin/bison
configure: using bison (GNU Bison) 2.4.2
checking for flex... /bin/flex
configure: using flex 2.5.35
checking for perl... /bin/perl
configure: using perl 5.8.8
checking for main in -lm... yes
checking for library containing setproctitle... no
checking for library containing dlopen... none required
checking for library containing socket... none required
checking for library containing shl_load... no
checking for library containing getopt_long... none required
checking for library containing crypt... no
checking for library containing fdatasync... no
checking for library containing gethostbyname_r... no
checking for library containing shmget... no
checking for inflate in -lz... yes
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... no
checking for stdint.h... no
checking for unistd.h... yes
checking crypt.h usability... no
checking crypt.h presence... no
checking for crypt.h... no
checking dld.h usability... no
checking dld.h presence... no
checking for dld.h... no
checking fp_class.h usability... no
checking fp_class.h presence... no
checking for fp_class.h... no
checking getopt.h usability... yes
checking getopt.h presence... yes
checking for getopt.h... yes
checking ieeefp.h usability... yes
checking ieeefp.h presence... yes
checking for ieeefp.h... yes
checking ifaddrs.h usability... no
checking ifaddrs.h presence... no
checking for ifaddrs.h... no
checking langinfo.h usability... no
checking langinfo.h presence... no
checking for langinfo.h... no
checking poll.h usability... yes
checking poll.h presence... yes
checking for poll.h... yes
checking pwd.h usability... yes
checking pwd.h presence... yes
checking for pwd.h... yes
checking sys/ioctl.h usability... yes
checking sys/ioctl.h presence... yes
checking for sys/ioctl.h... yes
checking sys/ipc.h usability... no
checking sys/ipc.h presence... no

[BUGS] BUG #6704: ALTER EXTENSION postgis SET SCHEMA leaves dangling relations

2012-06-22 Thread jeff
The following bug has been logged on the website:

Bug reference:  6704
Logged by:  Jeff Frost
Email address:  j...@pgexperts.com
PostgreSQL version: 9.1.4
Operating system:   Windows and Linux
Description:

DROP and CREATE EXTENSION appear to work fine, but if you ALTER EXTENSION
postgis SET SCHEMA foo, it leaves a few relations behind.  Then if you drop
that schema, you can't pg_dump the DB anymore.

See the reproducible test case below.  Note at the bottom that even though
the ALTER left items in the original schema, I'm able to drop that schema
without CASCADE, and if I then DROP EXTENSION, it happily gets rid of
those.

Test case:
pgx-test:~ $ createdb ext_test
pgx-test:~ $ psql ext_test
psql (9.1.4)
Type "help" for help.

ext_test=# create schema test;
CREATE SCHEMA
Time: 27.736 ms
ext_test=# create EXTENSION postgis with schema test;
CREATE EXTENSION
Time: 764.102 ms
ext_test=# alter EXTENSION postgis set schema public;
ALTER EXTENSION
Time: 221.224 ms
ext_test=# select oid, nspname from pg_namespace ;
   oid   |      nspname       
---------+--------------------
      99 | pg_toast
   11124 | pg_temp_1
   11125 | pg_toast_temp_1
      11 | pg_catalog
    2200 | public
   12257 | information_schema
 6981446 | test
(7 rows)

Time: 0.256 ms
ext_test=# select oid, relname, relnamespace from pg_class where
relnamespace = 6981446;

   oid   |       relname        | relnamespace 
---------+----------------------+--------------
 6981694 | spatial_ref_sys_pkey |      6981446
(1 row)

Time: 36.072 ms
ext_test=# select oid, proname, pronamespace from pg_proc where pronamespace
= 6981446;
 oid | proname | pronamespace 
-----+---------+--------------
(0 rows)

Time: 7.797 ms
ext_test=# select oid, typname, typnamespace from pg_type where typnamespace
= 6981446;
   oid   |      typname       | typnamespace 
---------+--------------------+--------------
 6981689 | spatial_ref_sys    |      6981446
 6981688 | _spatial_ref_sys   |      6981446
 6981995 | geography_columns  |      6981446
 6981994 | _geography_columns |      6981446
 6982099 | geometry_columns   |      6981446
 6982098 | _geometry_columns  |      6981446
 6982541 | raster_columns     |      6981446
 6982540 | _raster_columns    |      6981446
 6982550 | raster_overviews   |      6981446
 6982549 | _raster_overviews  |      6981446
(10 rows)

Time: 7.844 ms
ext_test=# select oid, conname, connamespace from pg_constraint where
connamespace = 6981446;
   oid   |          conname           | connamespace 
---------+----------------------------+--------------
 6981690 | spatial_ref_sys_srid_check |      6981446
 6981695 | spatial_ref_sys_pkey       |      6981446
(2 rows)

Time: 0.201 ms

ext_test=# DROP EXTENSION postgis ;
DROP EXTENSION
Time: 214.645 ms
ext_test=# select oid, relname, relnamespace from pg_class where
relnamespace = 6981446;

 oid | relname | relnamespace 
-----+---------+--------------
(0 rows)

Time: 49.484 ms
ext_test=# select oid, proname, pronamespace from pg_proc where pronamespace
= 6981446;
 oid | proname | pronamespace 
-----+---------+--------------
(0 rows)

Time: 7.698 ms
ext_test=# select oid, typname, typnamespace from pg_type where typnamespace
= 6981446;
 oid | typname | typnamespace 
-----+---------+--------------
(0 rows)

Time: 7.864 ms
ext_test=# select oid, conname, connamespace from pg_constraint where
connamespace = 6981446;
 oid | conname | connamespace 
-----+---------+--------------
(0 rows)

Time: 0.144 ms
ext_test=# 
ext_test=# \q




[BUGS] BUG #7902: lazy cleanup of extraneous WAL files can cause out of disk issues

2013-02-22 Thread jeff
The following bug has been logged on the website:

Bug reference:  7902
Logged by:  Jeff Frost
Email address:  j...@pgexperts.com
PostgreSQL version: 9.2.3
Operating system:   Ubuntu 12.04
Description:

While doing acceptance testing on a new Ubuntu 12.04 PostgreSQL server
running 9.2.3, we set checkpoint_segments = 128,
checkpoint_completion_target = 0.9 and placed pg_xlog on a separate 20G
partition. Also, archive_mode = off on this system.

According to the docs, you would expect the system to attempt to keep the
WAL files down close to 3 * checkpoint_segments + 1.  Unfortunately, this
does not appear to be the case because a pgbench run would run the pg_xlog
partition out of space.
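As a rough back-of-the-envelope check of why this is surprising: taking the documented steady-state bound of 3 * checkpoint_segments + 1 segments at the default 16MB segment size (assumptions from the report, not measurements), the expected ceiling lands well under the 20G partition:

```python
# Expected steady-state WAL ceiling for this configuration, per the docs.
checkpoint_segments = 128
segment_size_mb = 16  # default WAL segment size

expected_segments = 3 * checkpoint_segments + 1
expected_gb = expected_segments * segment_size_mb / 1024.0

print(expected_segments)      # 385
print(round(expected_gb, 2))  # 6.02 -- far below the 20 GB pg_xlog partition
```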

The pgbench run script looks like this:

#!/bin/bash

dropdb bench
createdb bench
pgbench -i -s 1000 bench
vacuumdb -a --analyze-only
psql -c "checkpoint"
pgbench -c 64 -j 16 -r -T 600 bench

While the pgbench run does cause lots of xlog-based checkpoints, they never
seem to remove more than a few files, and often pg_xlog grows to more than
20G and the PostgreSQL service falls over.

After moving pg_xlog to a larger partition, it seems it peaks at about 22G
in size. 

A manual checkpoint after the run always brings it back down to ~ 4G in
size.

Interestingly, I was unable to reproduce this with 9.2.3 on our in-house
test system; however, the in-house system has much less RAM and CPU
resources, so this may only be an issue on larger systems.  The system that
exhibits the issue has 128G of RAM and 16 cores (32 with hyperthreading).

I also tested 9.2.2 on the affected system and it acted the same.

Hope to test 9.1.8 in the next few days.





[BUGS] BUG #8225: logging options don't change after reload

2013-06-11 Thread jeff
The following bug has been logged on the website:

Bug reference:  8225
Logged by:  Jeff Frost
Email address:  j...@pgexperts.com
PostgreSQL version: 9.1.8
Operating system:   various
Description:

I've seen this a few times on client servers but still can't seem to boil it
down to a nice reproducible test case.  

What happens is that we change various logging options in postgresql.conf,
then reload, and every so often, the settings don't seem to take effect even
though they are logged as being changed.

Here's an example from a recent logging run:

grep -hi checkpoint *.csv
2013-05-31 12:58:33.051 MDT,,,5398,,5189c940.1516,8,,2013-05-07 21:40:48
MDT,,0,LOG,0,"parameter ""log_checkpoints"" changed to
""on""",""
2013-05-31 15:18:36.131 MDT,,,5398,,5189c940.1516,22,,2013-05-07 21:40:48
MDT,,0,LOG,0,"parameter ""log_checkpoints"" removed from configuration
file, reset to default",""

So, note that log_checkpoints was logged as being changed, then 2hrs 20mins
later it's logged as being set back to default, but there's not a single
checkpoint logged in that time period.  I checked and checkpoint_timeout is
indeed set to 300s.  

In addition, we change log_filename to datacollection-%H, and while that
also shows up in the log during the reload:

2013-05-31 12:58:33.051 MDT,,,5398,,5189c940.1516,9,,2013-05-07 21:40:48
MDT,,0,LOG,0,"parameter ""log_filename"" changed to
""datacollection-%H""",""

and the logfile is created, PostgreSQL continues to log to the old filename
for the duration of the logging run.

I've seen this on recent 9.0, 9.1 and 9.2 versions.  This particular run was
on 9.1.8.





[BUGS] BUG #8315: GRANTS allowed on extension functions, but not dumped by pg_dump

2013-07-18 Thread jeff
The following bug has been logged on the website:

Bug reference:  8315
Logged by:  Jeff Frost
Email address:  j...@pgexperts.com
PostgreSQL version: 9.2.4
Operating system:   Scientific Linux 6
Description:

Simple test case:


pgx-test:~ $ createdb permtest
pgx-test:~ $ psql permtest
psql (9.2.4)
Type "help" for help.


permtest=# create extension dblink;
CREATE EXTENSION


permtest=# create role permtestuser with login nosuperuser;


permtest=# grant EXECUTE on FUNCTION dblink(text) to permtestuser;
GRANT


pgx-test:~ $ pg_dump -s permtest | grep GRANT
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;


I imagine we are expecting people to write security definer wrapper
functions, but if so, we should probably not allow them to grant permissions
on extension functions (and tables?) if we aren't going to preserve them in
the dump.





Re: [BUGS] unexpected RULE behavior

2008-07-07 Thread Jeff Davis
On Mon, 2008-07-07 at 12:16 +0200, Mathias Palm wrote:
> CREATE TABLE data (id SERIAL, title VARCHAR);
> CREATE TABLE data_copy(id INT4, title VARCHAR);
> CREATE RULE make_copy AS ON INSERT TO data DO INSERT INTO data_copy
> (id,title) VALUES (NEW.id, NEW.title);
> INSERT INTO data (title) VALUES ('test');
> 
> database=# SELECT * FROM data;
>  id | title
> +---
>   1 | test
> (1 Zeile)
> 
> database=# SELECT * FROM data_copy;
>  id | title
> +---
>   2 | test
> (1 Zeile)
> 

This is a feature, not a bug. To get the kind of behavior you want, you
need to either use a trigger, or have the rule call a function that
inserts the same values into both tables.

SERIAL implicitly calls nextval() on a sequence to get the default value
to insert. The rule rewrites the query into two queries, and each query
calls nextval(), and each call of nextval() returns a different value.

One way to think about it is that a rule doesn't save any values away in
variables. Functions do save the arguments as variables on the stack,
and those variables can be used multiple times, so that's why calling a
function works.
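The double-nextval() effect described above can be sketched outside SQL. In this toy model (all names are illustrative stand-ins, not PostgreSQL internals), a shared counter plays the role of the SERIAL column's sequence; the rule rewrite evaluates the default once per rewritten statement, while the trigger/function style binds it once and reuses it:

```python
import itertools

seq = itertools.count(1)

def nextval():
    # Stand-in for nextval() on the SERIAL column's sequence.
    return next(seq)

# Rule-style rewrite: the INSERT becomes two statements, and each one
# evaluates the default independently -- nextval() runs twice.
data_id = nextval()        # INSERT INTO data ...
data_copy_id = nextval()   # INSERT INTO data_copy ...
assert (data_id, data_copy_id) == (1, 2)   # the rows disagree

# Trigger/function style: the value is computed once, saved in a
# variable, and reused, so both rows get the same id.
seq = itertools.count(1)   # reset the toy sequence
row_id = nextval()
assert (row_id, row_id) == (1, 1)          # the rows agree
```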

Regards,
Jeff Davis




[BUGS] BUG #4324: Default value for a column is not returned in select when column has not been explicitly set

2008-07-26 Thread Jeff Galyan

The following bug has been logged online:

Bug reference:  4324
Logged by:  Jeff Galyan
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.3
Operating system:   Linux
Description:Default value for a column is not returned in select
when column has not been explicitly set
Details: 

When a column does not have a value explicitly set, v8.3.3 is not returning
the default value for the column, as 8.2.3 used to (per section 11.5 of the
SQL specification).  The purpose of setting a default value for a column is
so a value will be returned if the column has not been explicitly set.  If a
nullable column has no value but does have a default, the specification
requires that the default value be returned.  If the column's value has been
explicitly set, then the value in the column must be returned.  Further,
when a default is specified in the column descriptor, INSERTs which omit
setting a value for the column should automatically insert the default value
into the column.  Again, the behavior in 8.2 conformed with the SQL
specification, section 11.5.  8.3 is not behaving per the spec.

Example:
Take an existing table with some data in it and add a nullable column of
type boolean with default value true.  In 8.2, 'select bool_column from
my_table' would have returned 'true' for all rows where the column had not
been explicitly set (which should be all of them at this point).  Subsequent
inserts would have the value automatically set to 'true' if no value was
specified, or whatever value is explicitly specified.  In 8.3, this case
will return NULL for all rows where the value has not been explicitly
specified.  Per sec. 11.5 of the SQL spec, the behavior of v8.2 is correct.



[BUGS] non-deterministic error related to MIN/MAX optimization

2008-08-25 Thread Jeff Davis
This problem exists in 8.3.3:

=> create table foo(a int);
CREATE TABLE
=> create index foo_a_idx on foo(a);
CREATE INDEX
=> select max(a), generate_series(1,2) as g from foo order by g desc;
 max | g 
-----+---
     | 2
     | 1
(2 rows)

=> explain select max(a), generate_series(1,2) as g from foo order by g
desc;
                                  QUERY PLAN
------------------------------------------------------------------------
 Sort  (cost=0.06..0.06 rows=1 width=0)
   Sort Key: (generate_series(1, 2))
   InitPlan
 ->  Limit  (cost=0.00..0.03 rows=1 width=4)
   ->  Index Scan Backward using foo_a_idx on foo
(cost=0.00..80.25 rows=2400 width=4)
 Filter: (a IS NOT NULL)
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
(7 rows)

=> set enable_indexscan=f;
SET
=> select max(a), generate_series(1,2) as g from foo order by g desc;
ERROR:  set-valued function called in context that cannot accept a set
=>  explain select max(a), generate_series(1,2) as g from foo order by g
desc;
QUERY PLAN 
---
 Sort  (cost=40.02..40.03 rows=1 width=4)
   Sort Key: (generate_series(1, 2))
   ->  Aggregate  (cost=40.00..40.02 rows=1 width=4)
 ->  Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)


I believe this is related to this commit:

Date:   Mon Mar 31 16:59:33 2008 +

Apply my original fix for Taiki Yamaguchi's bug report about 
DISTINCT MAX().
Add some regression tests for plausible failures in this area.

However, that commit actually added a test case, which confuses me. I'm
not really sure what the behavior is supposed to be, but the output
shouldn't depend on the optimizer.

Regards,
Jeff Davis




Re: [BUGS] non-deterministic error related to MIN/MAX optimization

2008-08-25 Thread Jeff Davis
On Mon, 2008-08-25 at 22:26 -0400, Tom Lane wrote: 
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > => select max(a), generate_series(1,2) as g from foo order by g desc;
> > ERROR:  set-valued function called in context that cannot accept a set
> 
> This strikes me as a pretty useless query, so the fact that it doesn't
> work doesn't bother me.  It's mostly accidental that there are any
> variants that do work, I think.  Why would you want a SRF in a sort key?

The following line was added to the regression tests:

aggregates.sql:226:
select max(unique2), generate_series(1,3) as g from tenk1 order by g
desc;

I have no argument with what you say above. But one of my colleagues at
Truviso was doing some experiments, and it was causing a regression
failure here. I should have been more clear.

So if it truly is a useless query, shouldn't we at least remove the
regression test?

Regards,
Jeff Davis






Re: [BUGS] non-deterministic error related to MIN/MAX optimization

2008-08-26 Thread Jeff Davis
On Tue, 2008-08-26 at 01:04 -0400, Tom Lane wrote:
> Please provide some more detail about those experiments.  The test case
> hasn't been seen to fail in the buildfarm, AFAIR.

Dan Farina, my colleague at Truviso, was experimenting with some query
transformations that pushed the range table entries down into a
subquery.

You can see the effect here:

=> select max(a), generate_series(1,2) as g from foo;
 max | g 
-----+---
     | 1
     | 2
(2 rows)

=> -- make "foo" into a subquery and add a no-op
=> -- to prevent it from pulling up the subquery
=> select max(a), generate_series(1,2) as g from (select a as a from foo
offset 0) dummy;
ERROR:  set-valued function called in context that cannot accept a set

So, although Dan's transformations were semantically correct, they ended
up causing this regression failure.

It doesn't have anything to do with the ORDER BY, so that part of my
example was unnecessary.

Regards,
Jeff Davis




Re: [BUGS] BUG #4418: Memory leak in query planner

2008-09-16 Thread Jeff Davis
On Tue, 2008-09-16 at 00:37 +, michael McMaster wrote:
> I intend on creating a view over a large number of tables (possibly > 1000).
>  Each table in the view has a CHECK constraint that partitions the data by a
> time range.

You may try using the inheritance mechanism rather than a UNION ALL
view.

Regards,
Jeff Davis




[BUGS] index bloat in 8.4-dev

2008-10-06 Thread Jeff Davis
I am seeing index bloat in the current head when the indexed values are
constantly increasing, and the lower values are being constantly
deleted.

I attached a simple python script that can be fed into psql. If this is
done on 8.3.3, the number of index relpages stays fairly constant,
around 500, and doesn't change after that.

If I do this against head, then the index relpages explode, and I
stopped the test when it reached about 46k pages for about 100k live
rows.

It's possible that this has something to do with the new FSM. I checked
out a release just before the new FSM, and I did see some bloat,
reaching about 3-4k index pages for 100k live rows (which is still worse
than 8.3.3), but it did not appear to happen nearly as fast. I don't
think it's entirely the FSM though, because 8.3.3 showed no problem at
all. Perhaps the FSM just makes some problem introduced in 8.4 more
apparent?

Regards,
Jeff Davis
i = 1

print "set search_path = mytest, public;"
print "drop table if exists foo;"
print "create table foo (i int unique);"

# Emit an endless stream of SQL: insert ever-increasing values, delete
# everything older, and vacuum, so the live rows constantly march toward
# the high end of the index.  Feed the output into psql.
while 1:
	print "INSERT INTO foo select generate_series(%d, %d);" % (i, i + 9)
	print "DELETE FROM foo WHERE i < %d;" % (i)
	print "VACUUM foo;"
	i += 10




Re: [BUGS] index bloat in 8.4-dev

2008-10-06 Thread Jeff Davis
On Mon, 2008-10-06 at 11:10 +0300, Heikki Linnakangas wrote:
> Jeff Davis wrote:
> > I am seeing index bloat in the current head when the indexed values are
> > constantly increasing, and the lower values are being constantly
> > deleted.
> > 
> > ...
> > It's possible that this has something to do with the new FSM.
> 
> Yep, it clearly has. Looks like I forgot about the index FSMs when I 
> added the FSM vacuum code. Index FSMs need to be vacuumed just like the 
> heap FSM. Fixed, thanks for the report.

There is some other bug still at work here. I am still seeing some
pretty severe bloat with the same script I attached in the previous
email. This is the current head (including your patch).

Note that these results are after running my script for about an hour
(although you can probably see the effects after 10 minutes), and my
script has a VACUUM after every DELETE.

I see this problem on 8.3.3 now, too. Originally, I suppose my test was
not long enough, but now I see the problem after about 10 minutes of
running.

Regards,
Jeff Davis

-- results from current head
=> select relpages from pg_class where relname = 'foo_i_key';
 relpages 
----------
    35255
(1 row)

=> select relpages from pg_class where relname = 'foo_i_key';
 relpages 
----------
    35255
(1 row)

=> select count(*) from mytest.foo;
 count 
-------
    10
(1 row)

=> vacuum verbose  mytest.foo;
INFO:  vacuuming "mytest.foo"
INFO:  index "foo_i_key" now contains 101163 row versions in 35255 pages
DETAIL:  0 index row versions were removed.
35029 index pages have been deleted, 35029 are currently reusable.
CPU 0.11s/0.07u sec elapsed 0.20 sec.
INFO:  "foo": found 0 removable, 101163 nonremovable row versions in 785
pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 98837 unused item pointers.
0 pages are entirely empty.
CPU 0.11s/0.08u sec elapsed 0.21 sec.
VACUUM

=> select min(i), max(i) from mytest.foo;
  min  |  max  
-------+-------
 20511 | 20520
(1 row)







Re: [BUGS] index bloat in 8.4-dev

2008-10-07 Thread Jeff Davis
On Tue, 2008-10-07 at 11:04 +0300, Heikki Linnakangas wrote:
> Jeff Davis wrote:
> > I see this problem on 8.3.3 now, too. Originally, I suppose my test was
> > not long enough, but now I see the problem after about 10 minutes of
> > running.
> 
> I ran the script for about 30 minutes on CVS HEAD, and the index didn't 
> grow at all after the first three iterations. Are you sure you didn't 
> have a long-running transaction open that prevented vacuum from working? 
> Did the heap bloat as well, or just the index?

In the VACUUM VERBOSE output I included, you can see that the heap is
only 785 pages (after 200M rows went through that table), and it
maintains that consistently. That means to me that the VACUUMs are
running and properly freeing the space in the heap. And the output seems
to indicate that it thinks it can re-use those index pages, it just
doesn't. 

This is a bit non-deterministic: the index pages will be steady for a
while, and then jump all of a sudden. Maybe it only happens when
resources are strained? I will try on a few other machines today and see
if I can identify the conditions a little more clearly.

Regards,
Jeff Davis





Re: [BUGS] index bloat in 8.4-dev

2008-10-07 Thread Jeff Davis
On Tue, 2008-10-07 at 08:14 -0700, Jeff Davis wrote:
> In the VACUUM VERBOSE output I included, you can see that the heap is
> only 785 pages (after 200M rows went through that table), and it
> maintains that consistently. That means to me that the VACUUMs are
> running and properly freeing the space in the heap. And the output seems
> to indicate that it thinks it can re-use those index pages, it just
> doesn't. 

I think something may have been flawed in my simpler tests, so ignore
this.

I did see some kind of problem here in a more complex case, but I think
I'll have to narrow it down again.

    Regards,
Jeff Davis




[BUGS] BUG #4491: regression in gist indexes

2008-10-25 Thread Jeff Frost

The following bug has been logged online:

Bug reference:  4491
Logged by:  Jeff Frost
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.4
Operating system:   Fedora 9/Gentoo/Mac OS X
Description:        regression in gist indexes
Details: 

It seems that 8.3.4 has a regression in the updating of gist indexes.  After
updating an indexed column in a row with a postgis gist index, the row is no
longer found in an index scan.  I have verified that this works on 8.2.7,
8.3.0, 8.3.1 and 8.3.3 but not 8.3.4.
Verified this is a problem on Fedora 9/Gentoo/Mac OS X.

You can see the test case here:

https://gist.github.com/d6c9b183196717d73b6a

Interestingly, the first index scan after the update does find the row, but
subsequent scans do not.

Here you can see it working on 8.3.0:

https://gist.github.com/d0dbbf29606822b8ceb9

You can grab the test table data dump here:

http://www.frostconsultingllc.com/coordinate_test.dmp

Please contact me if there's anything else you need to reproduce the bug. 
Note that if you're using the coordinate_test data, you'll have to set
enable_seqscan = 0 to force an index scan as the table only contains 100
rows.



[BUGS] BUG #4491: regression in gist indexes

2008-10-25 Thread Jeff Frost

Looks like this is a dup of #4479:

http://archives.postgresql.org/pgsql-bugs/2008-10/msg00094.php

-- Forwarded message --
Date: Wed, 22 Oct 2008 19:11:51 -0300
From: [EMAIL PROTECTED]
To: Jeff Frost <[EMAIL PROTECTED]>
Subject: Stalled post to pgsql-bugs

Your message to pgsql-bugs has been delayed, and requires the approval
of the moderators, for the following reason(s):

The author ("Jeff Frost" <[EMAIL PROTECTED]>)
  is not a member of any of the restrict_post groups.

If you do not wish the message to be posted, or have other concerns,
please send a message to the list owners at the following address:
  [EMAIL PROTECTED]
--- Begin Message ---

The following bug has been logged online:

Bug reference:  4491
Logged by:  Jeff Frost
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.4
Operating system:   Fedora 9/Gentoo/Mac OS X
Description:        regression in gist indexes
Details: 

It seems that 8.3.4 has a regression in the updating of gist indexes.  After
updating an indexed column in a row with a postgis gist index, the row is no
longer found in an index scan.  I have verified that this works on 8.2.7,
8.3.0, 8.3.1 and 8.3.3 but not 8.3.4.
Verified this is a problem on Fedora 9/Gentoo/Mac OS X.

You can see the test case here:

https://gist.github.com/d6c9b183196717d73b6a

Interestingly, the first index scan after the update does find the row, but
subsequent scans do not.

Here you can see it working on 8.3.0:

https://gist.github.com/d0dbbf29606822b8ceb9

You can grab the test table data dump here:

http://www.frostconsultingllc.com/coordinate_test.dmp

Please contact me if there's anything else you need to reproduce the bug. 
Note that if you're using the coordinate_test data, you'll have to set
enable_seqscan = 0 to force an index scan as the table only contains 100
rows.
--- End Message ---



[BUGS] possible array parsing bug

2008-11-11 Thread Jeff Davis

This does not look right to me:

=# select regexp_split_to_array('dsf,sdfsdf',',')::text[][100]; 
 regexp_split_to_array 
-----------------------
 {dsf,sdfsdf}
(1 row)

Is this known?

Regards,
Jeff Davis




Re: [BUGS] Status of issue 4593

2009-01-05 Thread Jeff Davis
On Mon, 2009-01-05 at 09:03 -0600, Lee McKeeman wrote:
> I did not see anything that indicated to me that order by may not be
> handled properly at the read committed isolation level, so I do believe
> this to be erroneous behavior, and therefore a bug. I have attempted
> this in 8.3.4 and
> 8.2.6 as I have ready access to installations of these versions. I can
> likely get access to an 8.3.5 installation if necessary for this bug to
> be investigated, but don't have one available to me at this time.

This looks like a bug to me, as well. Transaction isolation affects
visibility of tuples, but ORDER BY should still work. Your example also
works if using FOR SHARE in connection 2.

The manual does have this to say about FOR UPDATE/SHARE:

"It is possible for a SELECT command using both LIMIT and FOR
UPDATE/SHARE clauses to return fewer rows than specified by LIMIT. This
is because LIMIT is applied first. The command selects the specified
number of rows, but might then block trying to obtain lock on one or
more of them. Once the SELECT unblocks, the row might have been deleted
or updated so that it does not meet the query WHERE condition anymore,
in which case it will not be returned."
  -- http://www.postgresql.org/docs/8.3/static/sql-select.html

I'm sure something very similar is happening with ORDER BY, so it should
be documented at a minimum.

However, I think we should consider your issue more serious, because I
think this is a violation of the SQL standard. I've been wrong about the
SQL standard plenty of times though, so don't take my word for it ;)

Regards,
Jeff Davis




Re: [BUGS] Status of issue 4593

2009-01-05 Thread Jeff Davis
On Mon, 2009-01-05 at 15:42 -0500, Tom Lane wrote:
> The only way to avoid this would be to lock before the sort, which could
> have the effect of locking more rows than are returned (if you also use
> LIMIT);

How would that work in the case of an index scan sort?

Regards,
    Jeff Davis





Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Jeff Davis
On Thu, 2009-02-26 at 15:25 -0800, Josh Berkus wrote:
> That is, a custom type in a complex type declaration is explicitly 
> schema-qualified, even when the schema in question is in the default 
> schema_path.  This is inconsistent with all other database objects, 
> which use "SET search_path" to qualify the correct schemas.
> 

Strange. However, I don't think setting the search path will suffice,
because the different types could come from different schemas in a way
that makes it impossible.

Functions are similar, actually. The argument list needs to specify
schema paths as well, if it's not in some expected place (I think it
does so for all schemas other than pg_catalog).

Regards,
Jeff Davis




Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Jeff Davis
On Thu, 2009-02-26 at 15:52 -0800, Josh Berkus wrote:
> Jeff,
> 
> > Functions are similar, actually. The argument list needs to specify
> > schema paths as well, if it's not in some expected place (I think it
> > does so for all schemas other than pg_catalog).
> 
> Except that they don't appear to do so.

Here is the case I'm talking about:

postgres=# create schema a;
CREATE SCHEMA
postgres=# create type a.int4 as (i pg_catalog.int4);
CREATE TYPE
postgres=# create function f1(x a.int4, y pg_catalog.int4) returns
pg_catalog.int4 language sql as $$ select 1; $$;
CREATE FUNCTION

-- pg_dump output:

SET search_path = public, pg_catalog;

...

CREATE FUNCTION f1(x a.int4, y integer) RETURNS integer
LANGUAGE sql
AS $$ select 1; $$;


So, there are some special cases somewhere so that the pg_dump output
isn't littered with unreadable "pg_catalog.int4" everywhere.

In the general case though, for any object that refers to multiple other
objects, I don't see any way around explicit schema qualification. I
suppose it could be smart and say "foo_type is unique in my search path,
so I don't need to schema-qualify it".

Have you considered working from the "custom" format rather than text?
I'm not sure whether it solves your problem, but I think it provides the
most information.

Regards,
Jeff Davis





Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Jeff Davis
On Thu, 2009-02-26 at 21:42 -0800, Josh Berkus wrote:
> Jeff,
> 
> > In the general case though, for any object that refers to multiple other
> > objects, I don't see any way around explicit schema qualification. I
> > suppose it could be smart and say "foo_type is unique in my search path,
> > so I don't need to schema-qualify it".
> 
> Yeah, but for most other objects "public" is also excluded as well as 
> pg_catalog.  For CREATE TYPE, "public" is explicit.
> 


Ah, I see what you mean. Here's what I get when the type is named
public.integer (pg_dump output):

CREATE FUNCTION f1(x "integer", y integer) RETURNS integer
LANGUAGE sql
AS $$ select 1; $$;

and here's what I get when I try to be creative, and I define a
public.tsvector type (pg_dump output):

CREATE FUNCTION f1(x tsvector, y pg_catalog.tsvector) RETURNS integer
LANGUAGE sql
AS $$ select 1; $$;

It seems like pg_dump tries fairly hard to make the output readable in
the typical case. It does seem a little inconsistent that the list of
types that make up another type don't follow the exact same rules; I
don't know the reason for that.

Is using the custom format a possibility?

Regards,
Jeff Davis




Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Jeff Davis
On Fri, 2009-02-27 at 01:24 -0500, Tom Lane wrote:
> Are you entirely sure that they don't?

Oh, you're right, of course:

postgres=# create type public.mytype as (i int);
CREATE TYPE
postgres=# create type public.mytype2 as (j mytype);
CREATE TYPE

-- pg_dump output:
CREATE TYPE mytype2 AS (
j mytype
);

Regards,
Jeff Davis




[BUGS] "pg_ctl -m fast stop" doesn't shut down with active COPY

2009-03-12 Thread Jeff Davis
This is a repost of:

http://archives.postgresql.org/pgsql-general/2009-03/msg00434.php

I didn't get any response, and it appears to be in contradiction to the
docs:

"SIGINT -- The server disallows new connections and sends all existing
server processes SIGTERM, which will cause them to abort their current
transactions and exit promptly."

http://www.postgresql.org/docs/8.3/static/server-shutdown.html

If you have an open COPY and no data is moving, it simply won't
terminate it. You can terminate it with ctrl-C from psql, but not a
SIGINT to the postmaster or a SIGINT or SIGTERM to the backend.

Regards,
Jeff Davis




Re: [BUGS] possible bug not in open items

2009-03-27 Thread Jeff Davis
On Thu, 2009-03-26 at 21:45 -0400, Bruce Momjian wrote:
> > http://archives.postgresql.org/pgsql-bugs/2009-03/msg00062.php
> > 
> > It may or may not be a real bug, but I didn't receive any response. If
> > you think it might be a bug, can you please add it to the open items?
> 
> Hmm, odd I don't have it either;  can you repost it?

The docs say:

"SIGINT -- The server disallows new connections and sends all existing
server processes SIGTERM, which will cause them to abort their current
transactions and exit promptly."

http://www.postgresql.org/docs/8.3/static/server-shutdown.html

If you have an open COPY and no data is moving, it simply won't
terminate it. You can terminate it with ctrl-C from psql, but not a
SIGINT to the postmaster or a SIGINT or SIGTERM to the backend.

Regards,
Jeff Davis




Re: [BUGS] possible bug not in open items

2009-03-27 Thread Jeff Davis
On Fri, 2009-03-27 at 15:43 -0400, Tom Lane wrote:
> Heikki Linnakangas  writes:
> > I'm not too familiar with this code, but I think we could just enable 
> > ImmediateInterruptOK in CopyGetData().
> 
> Only if you are wanting to break things.
> 
> The reason we don't allow client read to be interrupted is the fear of
> losing protocol sync on an incomplete message.  For the SIGTERM case
> this would (probably) be okay, since we aren't going to pay any more
> attention to the client anyway, but accepting SIGINT there is right out.
> 

That's perfectly acceptable to me. I'm only concerned about the shutdown
case, and that's the only case that's in conflict with the docs.

Regards,
Jeff Davis




Re: [BUGS] possible bug not in open items

2009-03-27 Thread Jeff Davis
On Fri, 2009-03-27 at 15:43 -0400, Tom Lane wrote:
> Heikki Linnakangas  writes:
> > I'm not too familiar with this code, but I think we could just enable 
> > ImmediateInterruptOK in CopyGetData().
> 
> Only if you are wanting to break things.
> 

Doesn't DoingCommandRead protect us in the SIGINT case?

Regards,
Jeff Davis






Re: [BUGS] possible bug not in open items

2009-04-09 Thread Jeff Davis
On Thu, 2009-04-09 at 12:59 -0400, Tom Lane wrote:
> Bruce Momjian  writes:
> > Where are we on this?
> 
> Pretty much nowhere --- there's no proposed patch, and I don't think
> it's exactly trivial.  Do you want to put it on TODO?

Here is a patch that does what I think Heikki was suggesting. If a
proper fix is non-trivial, then I assume there's some problem with my
patch, but I'll post it for the archives anyway.

I don't see any obvious protocol synchronization problem, and it looks
like DoingCommandRead protects against that in the case of SIGINT to a
backend. And in the case of SIGTERM to a backend, the connection will be
terminated anyway.

Regards,
Jeff Davis
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index c8223bf..c0d3622 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -487,6 +487,17 @@ static int
 CopyGetData(CopyState cstate, void *databuf, int minread, int maxread)
 {
 	int			bytesread = 0;
+	bool		old_ImmediateInterruptOK = ImmediateInterruptOK;
+
+	/*
+	 * For the duration of CopyGetData, we want to allow immediate
+	 * interrupts so that a shutdown can still occur even if the
+	 * client stalls in the middle of a COPY. This won't affect query
+	 * cancellation, because DoingCommandRead will still be true if in
+	 * the middle of receiving a message, which is important to avoid
+	 * protocol synchronization problems.
+	 */
+	ImmediateInterruptOK = true;
 
 	switch (cstate->copy_dest)
 	{
@@ -542,6 +553,7 @@ CopyGetData(CopyState cstate, void *databuf, int minread, int maxread)
 		case 'c':		/* CopyDone */
 			/* COPY IN correctly terminated by frontend */
 			cstate->fe_eof = true;
+			ImmediateInterruptOK = old_ImmediateInterruptOK;
 			return bytesread;
 		case 'f':		/* CopyFail */
 			ereport(ERROR,
@@ -578,6 +590,8 @@ CopyGetData(CopyState cstate, void *databuf, int minread, int maxread)
 			break;
 	}
 
+	ImmediateInterruptOK = old_ImmediateInterruptOK;
+
 	return bytesread;
 }
 



Re: [BUGS] possible bug not in open items

2009-04-10 Thread Jeff Davis
On Fri, 2009-04-10 at 14:47 -0400, Tom Lane wrote:
> This patch is so wrong that it's scary.  You can't have
> ImmediateInterruptOK true over the duration of any significant amount of
> backend processing --- as an example, if you take control away in the
> middle of a malloc call, you'll probably be left with a corrupt malloc
> arena.
> 

Thank you for the explanation. My initial thinking was that either
DoingCommandRead would protect us (for SIGINT to the backend), or we
were going to terminate the process anyway (for SIGTERM). But it sounds
like it leaves us in a state so unsafe that we can't even abort the
transaction nicely.

Regards,
Jeff Davis




Re: [BUGS] distinct on doesn't fail without order by? why?

2009-05-18 Thread Jeff Davis
On Mon, 2009-05-18 at 20:24 +0200, hubert depesz lubaczewski wrote:
> I was under impression that select distinct on (xx) ...
> will fail if xx doesn't match the left most part of order by. i.e. it
> requires order by xx, while allowing order by xx, something, else.
> 
> But it seems you can run the query with no order by clause at all.
> 
> is it intentional?

This is documented behavior:

"Note that the 'first row' of each set is unpredictable unless ORDER BY
is used to ensure that the desired row appears first."

http://www.postgresql.org/docs/8.4/static/sql-select.html#SQL-DISTINCT

Regards,
Jeff Davis




Re: [BUGS] Cursor with hold emits the same row more than once across commits in 8.3.7

2009-06-09 Thread Jeff Davis
On Tue, 2009-06-09 at 12:07 -0400, Tom Lane wrote:
> We could probably fix this specific issue by refactoring things in such
> a way that the seqscan start point is frozen on the first read and
> re-used after rewinds.

I don't know what you mean by "frozen" exactly, but the start point of a
synchronized scan is stored in shared memory; otherwise, it wouldn't
know where to stop.

Regards,
Jeff Davis




Re: [BUGS] Cursor with hold emits the same row more than once across commits in 8.3.7

2009-06-09 Thread Jeff Davis
On Tue, 2009-06-09 at 10:51 -0700, Jeff Davis wrote:
> On Tue, 2009-06-09 at 12:07 -0400, Tom Lane wrote:
> > We could probably fix this specific issue by refactoring things in such
> > a way that the seqscan start point is frozen on the first read and
> > re-used after rewinds.
> 
> I don't know what you mean by "frozen" exactly, but the start point of a
> synchronized scan is stored in shared memory; otherwise, it wouldn't
> know where to stop.
> 

Correction: I didn't actually mean _shared_ memory there. It's just
backend-local memory.

Regards,
Jeff Davis




[BUGS] BUG #4952: commit_delay ignored because CountActiveBackends always returns zero

2009-07-29 Thread Jeff Janes

The following bug has been logged online:

Bug reference:  4952
Logged by:  Jeff Janes
Email address:  jeff.ja...@gmail.com
PostgreSQL version: 8.4.0
Operating system:   Linux 2.4.21-15.0.3.ELsmp
Description:        commit_delay ignored because CountActiveBackends always
returns zero
Details: 

I've found that commit_delay never has an effect.

By instrumenting src/backend/access/transam/xact.c, I see that this is
because CountActiveBackends always returns zero.  

This seems to be a bug introduced by: 
http://archives.postgresql.org/pgsql-committers/2009-03/msg00259.php

into file src/backend/storage/ipc/procarray.c

I believe the source of the bug is the addition to that file of

+   if (proc != NULL)
+   continue;

The sense of this test should be inverted, as it is NULLs, not non-nulls
that need to be skipped.

Due to this test all truly active backends get skipped, so
CountActiveBackends() always returns zero.  Also, I believe without evidence
that the change fails to correct the (hard to reproduce) bug it was
originally introduced to correct, as a proc that is NULL does not get
skipped, and goes on to be dereferenced.

If I change this code to:

+   if (proc == NULL)
+   continue;

Then I find that commit_delay now does have an effect. 

(The effect is to make "pgbench -c 15" slower when commit_delay is set to
the max value of 10.  This is what I thought would happen, and was
surprised when it originally did not.)

thanks,

Jeff



[BUGS] BUG #4965: missing tests in tools/fsync/test_fsync.c

2009-08-05 Thread Jeff Janes

The following bug has been logged online:

Bug reference:  4965
Logged by:  Jeff Janes
Email address:  jeff.ja...@gmail.com
PostgreSQL version: 8.4.0
Operating system:   Linux
Description:        missing tests in tools/fsync/test_fsync.c
Details: 

In the part that implements "Compare file sync methods with one 8k write",
the #ifdef OPEN_SYNC_FLAG code
is nested within the #ifdef OPEN_DATASYNC_FLAG code.

This causes o_sync to be skipped if o_dsync is unavailable, but only for
this particular section (the section with 2 8k writes doesn't have this
problem.)

Also, the statement that prints the "Compare file sync methods with one 8k
write" section title is up in the #ifdef block of a previous section, where
it might be omitted on systems without an o_sync.
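A minimal sketch of the corrected structure, simulating a platform that defines OPEN_SYNC_FLAG but not OPEN_DATASYNC_FLAG (the flag names follow test_fsync.c; the function body is an invented stand-in for the timing code):

```c
/* Simulate a platform with O_SYNC but no O_DSYNC. */
#define OPEN_SYNC_FLAG 1
/* OPEN_DATASYNC_FLAG deliberately left undefined */

static int ran_odsync = 0;
static int ran_osync  = 0;

static void one_8k_write_comparison(void)
{
    /* The section title should be printed here, unconditionally,
     * not inside the previous section's #ifdef block. */

#ifdef OPEN_DATASYNC_FLAG
    ran_odsync = 1;             /* would time open_datasync here */
#endif

/* Sibling block, NOT nested inside the o_dsync one, so the o_sync
 * test still runs when o_dsync is unavailable. */
#ifdef OPEN_SYNC_FLAG
    ran_osync = 1;              /* would time open_sync here */
#endif
}
```

With the blocks nested as in the reported code, the o_sync branch would be compiled out whenever OPEN_DATASYNC_FLAG was undefined.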



[BUGS] pg_ctl infinite loop and memory leak

2009-09-01 Thread Jeff Davis
To reproduce:

1. initdb -D data
2. cat /dev/null > data/postgresql.conf
3. pg_ctl -w -D data start

I attached a quick patch that seems to do the trick. It appears that
fgets() will always return non-NULL if the size passed in is 1 (i.e.
maxlength in the caller is 0).

The patch also changes the same readfile() function in initdb.c. I
assume it's not a practical problem there, but it should be fixed.

Thanks to Corry Haines (chaines at truviso dot com) for reporting the
problem.

Regards,
Jeff Davis
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
index 4b0b723..e544e3e 100644
--- a/src/bin/initdb/initdb.c
+++ b/src/bin/initdb/initdb.c
@@ -409,11 +409,10 @@ readfile(char *path)
 
 	rewind(infile);
 	nlines = 0;
-	while (fgets(buffer, maxlength + 1, infile) != NULL)
-	{
-		result[nlines] = xstrdup(buffer);
-		nlines++;
-	}
+
+	if (maxlength > 0)
+		while (fgets(buffer, maxlength + 1, infile) != NULL)
+			result[nlines++] = xstrdup(buffer);
 
 	fclose(infile);
 	free(buffer);
diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c
index 08e38e7..ede6e5b 100644
--- a/src/bin/pg_ctl/pg_ctl.c
+++ b/src/bin/pg_ctl/pg_ctl.c
@@ -329,8 +329,10 @@ readfile(const char *path)
 	/* now reprocess the file and store the lines */
 	rewind(infile);
 	nlines = 0;
-	while (fgets(buffer, maxlength + 1, infile) != NULL)
-		result[nlines++] = xstrdup(buffer);
+
+	if (maxlength > 0)
+		while (fgets(buffer, maxlength + 1, infile) != NULL)
+			result[nlines++] = xstrdup(buffer);
 
 	fclose(infile);
 	free(buffer);
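The fgets() corner case behind the loop can be demonstrated in isolation. On glibc and other common implementations, a size of 1 leaves room only for the terminating '\0', so fgets() stores an empty string and returns the buffer (non-NULL) without consuming any input; a loop conditioned on a non-NULL return therefore never advances. The helper name below is invented for illustration:

```c
#include <stdio.h>

/*
 * Returns 1 if fgets(buf, 1, f) exhibits the corner case: non-NULL
 * return, empty string stored, and no input consumed (so a loop on
 * the return value would spin forever).
 */
static int fgets_size1_returns_nonnull(void)
{
    FILE *f = tmpfile();
    char buf[8] = "XXXXXXX";
    int result;

    if (f == NULL)
        return -1;
    fputs("hello\n", f);
    rewind(f);

    /* size 1: room only for '\0', nothing is read */
    result = (fgets(buf, 1, f) == buf)  /* "success" */
          && (buf[0] == '\0')           /* empty string stored */
          && (ftell(f) == 0);           /* stream position unchanged */
    fclose(f);
    return result;
}
```

This matches the zero-length-file case in readfile(): maxlength is 0, so fgets() is called with a size of 1 and the while loop never terminates.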



Re: [BUGS] BUG #5028: CASE returns ELSE value always when type is "char"

2009-09-01 Thread Jeff Davis
On Tue, 2009-09-01 at 13:49 -0500, Kevin Grittner wrote:
> I figured that; I'm just trying to understand what seems to me like an
> odd wart on the type system.  I figure I must be missing something
> important, so I'd kinda like to find out what that is.

If I understand your question, you're comparing:

  (a) leaving a literal as "unknown" until you've finished inferring 
  types (current behavior)
  (b) casting every unknown to text immediately, and then trying to 
  infer the types

In general, option (b) eliminates information that might be useful for
making good inferences about the correct operators to use, and also
finding cases of ambiguity.

For instance, consider the current behavior:

  1. select now()::text < 'January 01, 2009'; -- true
  2. select now()   < 'January 01, 2009'; -- false
  3. select now()   < 'January 01, 2009'::text;
 ERROR:  operator does not exist: timestamp with time zone < text

Example #2 shows that we can infer that the RHS is of type timestamptz
based on the type of the LHS. That's desirable behavior in any
type-inferencing system -- without it you might as well just explicitly
cast all literals. Example #3 is ambiguous: we have no way to know
whether to choose "< (timestamptz, timestamptz)" or "< (text, text)",
and an ERROR is desirable behavior to avoid confusing results.

But you can't have both of those desirable behaviors unless you are
somehow aware that "'January 01, 2009'" is something more malleable than
"now()" in example #2. Calling the RHS "unknown" in example #2 gives us
that information.

Regards,
Jeff Davis





Re: [BUGS] pg_ctl infinite loop and memory leak

2009-09-01 Thread Jeff Davis
On Tue, 2009-09-01 at 22:01 -0400, Tom Lane wrote:
> Huh, interesting corner case.  I'd be inclined to fix by initializing
> maxlength to 1 though.
> 
> Where's the memory leak?

The xstrdup() on the zero-length string.

Regards,
Jeff Davis




Re: [BUGS] BUG #5028: CASE returns ELSE value always when type is"char"

2009-09-02 Thread Jeff Davis
On Wed, 2009-09-02 at 08:57 -0500, Kevin Grittner wrote:
> >   (a) leaving a literal as "unknown" until you've finished
> >   inferring types (current behavior)
> >   (b) casting every unknown to text immediately, and then trying to 
> >   infer the types
>  
> No, that's not it.  I'm wondering why it isn't treated as text. 
> Period.  Full stop.  Nothing to infer.  Anywhere that we have implicit
> casts defined from text to something else could, of course, still
> operate; but it would be text.  No guessing.

If you have very many implicit casts, I think you lose the
predictability and safety you're looking for, and/or end up with a lot
of errors that eliminate the convenience of implicit casting.
 
> It often seems to have the opposite effect.  See the original post.

The original problem has more to do with the fact that interpreting an
unknown value as a char seems to just discard a lot of information. I
assume that's part of the standard, but it seems like a bad idea any
time you silently discard data (which is why we prevented varchar(n)
from silently truncating a while ago).

> Here I think you have answered my question.  It is seen as a feature,
> since it allows people to avoid the extra keystrokes of coding
> type-specific literal values, and allows them the entertainment of
> seeing how the values get interpreted.  :-)
>  
> > But you can't have both of those desirable behaviors
>  
> Whether they are desirable is the point of disagreement.  At least I
> now understand the reasoning.

They are desirable for a system that infers types from context. I agree
that there's more safety by explicitly declaring the type of all
literals; but I disagree that using implicit casts to make up for a lack
of an "unknown" type will improve matters (either for convenience or
safety).

Regards,
Jeff Davis




Re: [BUGS] BUG #5028: CASE returns ELSE value always when type is "char"

2009-09-04 Thread Jeff Davis
On Fri, 2009-09-04 at 11:15 -0400, Tom Lane wrote:
> On the whole, throwing an error seems better from a usability
> perspective.
> 
> Comments?

Agreed.

Regards,
Jeff Davis




[BUGS] strange bug with gist over box and circle

2009-09-15 Thread Jeff Davis
If I create a gist index over a box and a circle, the index attributes
appear to both have type box.

I don't see any other, similar situations with other types, and I
haven't investigated the cause yet. Most similar situations work fine.

Regards,
    Jeff Davis


postgres=# select version();

                                version
------------------------------------------------------------------------
 PostgreSQL 8.5alpha1 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(Debian 4.3.4-2) 4.3.4, 64-bit
(1 row)

postgres=# create table foo5(b box, c circle);
CREATE TABLE
postgres=# create index foo5_idx on foo5 using gist (b,c);
CREATE INDEX
postgres=# \d foo5
 Table "public.foo5"
 Column |  Type  | Modifiers 
++---
 b  | box| 
 c  | circle | 
Indexes:
"foo5_idx" gist (b, c)

postgres=# \d foo5_idx
  Index "public.foo5_idx"
 Column | Type | Definition 
+--+
 b  | box  | b
 c  | box  | c
gist, for table "public.foo5"






Re: [BUGS] strange bug with gist over box and circle

2009-09-16 Thread Jeff Davis
On Wed, 2009-09-16 at 10:36 -0400, Tom Lane wrote:
> This is expected, no?  Those opclasses use the STORAGE option.

I see, that makes sense. I was making the assumption that the types
matched in my new patch, and obviously that's incorrect.
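The declared storage type can be checked in the catalogs. A sketch (column names per recent PostgreSQL; an opckeytype of 0 means the opclass stores the input type unchanged):

```sql
-- Show the declared key storage type for each gist operator class:
SELECT opcname, opckeytype::regtype
FROM pg_opclass
WHERE opcmethod = (SELECT oid FROM pg_am WHERE amname = 'gist');
```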

Regards,
Jeff Davis






[BUGS] BUG #5154: ERROR: cannot assign non-composite value to a row variable

2009-10-30 Thread Jeff Shanab

The following bug has been logged online:

Bug reference:  5154
Logged by:  Jeff Shanab
Email address:  jsha...@earthlink.net
PostgreSQL version: 8.3.5
Operating system:   Linux
Description:ERROR:  cannot assign non-composite value to a row
variable
Details: 

As discussed on IRC, I had a problem with a utility function that was
being passed a NEW row and a null for the OLD row. The error was created
when it tries to store the row variable in the local variables. RhodiumToad
on the list provided this simple test.

create type foo1 as (a integer, b text);
CREATE TYPE
create type foo2 as (c integer, d foo1);
CREATE TYPE

create function foo() returns foo2 language plpgsql as $f$ declare v foo2;
begin v := null; return v; end; $f$;
CREATE FUNCTION

select foo();

ERROR: cannot assign non-composite value to a row variable

CONTEXT: PL/pgSQL function "foo" line 1 at assignment



[BUGS] BUG #5157: Hash index not concurrency safe

2009-10-31 Thread Jeff Janes

The following bug has been logged online:

Bug reference:  5157
Logged by:  Jeff Janes
Email address:  jeff.ja...@gmail.com
PostgreSQL version: 8.4.1
Operating system:   Linux
Description:Hash index not concurrency safe
Details: 

Hash index is not concurrency safe, starting in REL8_4_0 and up to HEAD.

T1:

create table foo (id int, x text);
create index asdlfkjsdf on foo using hash (id);
insert into foo select 1, 'xxx' from generate_series(1,100);
set enable_seqscan =off;
set enable_bitmapscan =off;
\timing on
select count(pg_sleep(.3)) from foo where id=1;
 count
---
   100
(1 row)

Time: 30897.835 ms

select count(pg_sleep(.3)) from foo where id=1;

While that is running switch to T2:
insert into foo select generate_series, 'xxx' from
generate_series(1,10);

Back in T1:

 count
---
 8
(1 row)

Time: 2474.709 ms


The pg_sleep is simply there to give me time to force the two transactions
to overlap.

The problem is that hashgettuple releases the buffer content share lock when
it returns a tuple, so when it comes back to get another tuple the block may
have been rearranged by concurrent inserts.  But the state of the scan
object, specifically so->hashso_curpos, makes no attempt to detect or
correct for this rearrangement.



Re: [BUGS] BUG #5157: Hash index not concurrency safe

2009-11-01 Thread Jeff Janes
On Sun, Nov 1, 2009 at 8:52 AM, Tom Lane  wrote:
> "Jeff Janes"  writes:
>> Hash index is not concurrency safe, starting in REL8_4_0 and up to HEAD.
>
> Ouch.  This used to be okay, because adding new entries to a hash page
> always added them at the end.  The 8.4 changes to keep individual hash
> pages sorted by hashcode broke it :-(.
>
> I think we could recover by having the hashgettuple code path
> re-synchronize by looking for the heap TID it previously returned.
> That must be at the same or higher index TID as we had stopped at.
> (Deletions are not possible, so we only have to search forward,
> and the TID must be there someplace.)

Can it get pushed to another page (an overflow page)?  My quick
reading of the code suggests it can't get pushed, which makes the fix
easier.

I'll work on a fix for it.  But if 8.4.2 is coming out in the next
couple of weeks and we want the fix to be in it, then we might want
someone more proficient than me to work on it.

Cheers,

Jeff



Re: [BUGS] Status of submitted bugs

2010-01-28 Thread Jeff Davis
On Thu, 2010-01-28 at 18:15 +0100, Giorgio Valoti wrote:
> Ok, sorry for the silly question but is the xml core support  
> comparable to contrib/xml2? Is it already there in 8.4 or is scheduled  
> for 8.5?

It's available in 8.3 and later. The documentation can be found here:

http://www.postgresql.org/docs/8.4/static/datatype-xml.html
http://www.postgresql.org/docs/8.4/static/functions-xml.html

Regards,
Jeff Davis




Re: [BUGS] possible bug not in open items

2010-02-28 Thread Jeff Davis
On Thu, 2010-02-25 at 23:15 -0500, Bruce Momjian wrote:
> Was this ever addressed?
> 

It doesn't appear to be fixed, and I don't see it on the TODO, either.
Should we add it there?

Regards,
Jeff Davis

> ---
> 
> Jeff Davis wrote:
> > On Thu, 2009-03-26 at 21:45 -0400, Bruce Momjian wrote:
> > > > http://archives.postgresql.org/pgsql-bugs/2009-03/msg00062.php
> > > > 
> > > > It may or may not be a real bug, but I didn't receive any response. If
> > > > you think it might be a bug, can you please add it to the open items?
> > > 
> > > Hmm, odd I don't have it either;  can you repost it?
> > 
> > The docs say:
> > 
> > "SIGINT -- The server disallows new connections and sends all existing
> > server processes SIGTERM, which will cause them to abort their current
> > transactions and exit promptly."
> > 
> > http://www.postgresql.org/docs/8.3/static/server-shutdown.html
> > 
> > If you have an open COPY and no data is moving, it simply won't
> > terminate it. You can terminate it with ctrl-C from psql, but not a
> > SIGINT to the postmaster or a SIGINT or SIGTERM to the backend.
> > 
> > Regards,
> > Jeff Davis
> 





[BUGS] Possible alpha5 SR bug

2010-04-12 Thread Jeff Davis
During the testing day organized a week ago, Quinn 
Weaver ran into what looks like a problem. I attached the log output at
the end of this email. Note that he was running a Mac, but replicating
from a Linux machine (both 64-bit). I know this is not a supported
configuration, but a segfault seems like a problem anyway.

Quinn helpfully provided a tarball of his data directory here:

http://fairpath.com/QuinnPgBug.tar.gz

and described his machine

"My machine is a Mac with an Intel Core 2 Duo processor (64-bit)
running Mac OS X 10.6.3.  It has 2 GB of RAM, which should be plenty
for the config we used."

I was trying to sort this bug out somewhat before posting, but we
weren't able to reproduce it (it happened near the end of testing, and
people were leaving), and I didn't have much chance to investigate in
the last week.

Regards,
Jeff Davis

postg...@tao:/usr/local/pgsql-9.0alpha5-build1/data/data9.0$ 
../../bin/postmaster -D /usr/local/pgsql-9.0alpha5-build1/data/data9.0
LOG:  database system was interrupted; last known up at 2010-04-03 16:55:20 PDT
Warning: Identity file /root/replicationkey not accessible: No such file or 
directory.
Could not create directory '/var/empty/.ssh'.
ssh_askpass: exec(/opt/local/libexec/ssh-askpass): No such file or directory
Host key verification failed.
LOG:  entering standby mode
LOG:  redo starts at 0/BCB8
Warning: Identity file /root/replicationkey not accessible: No such file or 
directory.
Could not create directory '/var/empty/.ssh'.
ssh_askpass: exec(/opt/local/libexec/ssh-askpass): No such file or directory
Host key verification failed.
LOG:  unexpected pageaddr 0/9B00 in log file 0, segment 189, offset 0
Warning: Identity file /root/replicationkey not accessible: No such file or 
directory.
Could not create directory '/var/empty/.ssh'.
ssh_askpass: exec(/opt/local/libexec/ssh-askpass): No such file or directory
Host key verification failed.
FATAL:  could not load library 
"/usr/local/pgsql-9.0alpha5-build1/lib/libpqwalreceiver.so": 
dlopen(/usr/local/pgsql-9.0alpha5-build1/lib/libpqwalreceiver.so, 10): Library 
not loaded: /usr/local/pgsql/lib/libpq.5.dylib
  Referenced from: 
/usr/local/pgsql-9.0alpha5-build1/lib/libpqwalreceiver.so
  Reason: no suitable image found.  Did find:
/Users/quinn/lib/libpq.5.dylib: stat() failed with errno=13
Warning: Identity file /root/replicationkey not accessible: No such file or 
directory.
Could not create directory '/var/empty/.ssh'.
ssh_askpass: exec(/opt/local/libexec/ssh-askpass): No such file or directory
Host key verification failed.
LOG:  unexpected pageaddr 0/9B00 in log file 0, segment 189, offset 0
Warning: Identity file /root/replicationkey not accessible: No such file or 
directory.
Could not create directory '/var/empty/.ssh'.
ssh_askpass: exec(/opt/local/libexec/ssh-askpass): No such file or directory
Host key verification failed.
LOG:  WAL receiver process (PID 1011) was terminated by signal 11: Segmentation 
fault
LOG:  terminating any other active server processes
postg...@tao:/usr/local/pgsql-9.0alpha5-build1/data/data9.0$ fg
bash: fg: current: no such job






[BUGS] minor bug: unlisten

2000-05-13 Thread Jeff Davis

I looked at the TODO before sending this, and found
nothing regarding listen/unlisten.

I was unable to find the bug report submission page at
www.postgresql.org, even though it was mentioned in
this template.

Your name   :   Jeff Davis
Your email address  :   [EMAIL PROTECTED]


System Configuration
-
  Architecture (example: Intel Pentium) :   Intel
Pentium

  Operating System (example: Linux 2.0.26 ELF)  :
Linux ELF (RedHat 6.2)

  PostgreSQL version (example: PostgreSQL-6.5.1):
PostgreSQL-7.0

  Compiler used (example:  gcc 2.8.0)   :   gcc 2.91.66


Please enter a FULL description of your problem:


Regarding UNLISTEN: 'UNLISTEN <name>' does not remove
LISTEN entries for '<name>' in the 'pg_listener' class,
nor does it prevent NOTIFYication. However, note that
'UNLISTEN *' does remove these entries, and prevent
NOTIFYication, as it should.

Please describe a way to repeat the problem.   Please
try to provide a concise reproducible example, if at
all possible: 
--

A full transcript (from 'psql' client) that
demonstrates this follows:

test=# listen test;
LISTEN
test=# notify test;
NOTIFY
Asynchronous NOTIFY 'test' from backend with pid
'17457' received.
test=# unlisten test;
UNLISTEN
test=# notify test;
NOTIFY
Asynchronous NOTIFY 'test' from backend with pid
'17457' received.
test=# unlisten *;
UNLISTEN
test=# notify test;
NOTIFY
test=#

End of transcript.

I tried this on two separate machines, and it was the
same. They are almost identical in hardware, software,
and configuration.

Perhaps you could also execute a query such as 'SELECT
* FROM pg_listener', so as to view table status at
various points during the aforementioned procedure.
This has been omitted due to report length concerns.

If you know how this problem might be fixed, list the
solution below:
-

If 'unlisten <name>' were aliased to the query
"DELETE FROM pg_listener WHERE relname='<name>',"
which works just fine, it would appear to solve the
problem, however I am not aware of performance issues
with this implementation, nor am I aware how the
current 'UNLISTEN' query is implemented.

Thank you,
Jeff Davis

__
Do You Yahoo!?
Send instant messages & get email alerts with Yahoo! Messenger.
http://im.yahoo.com/



[BUGS] Backend dies when overloading + operator for bool

2000-12-19 Thread Jeff Davis

If PostgreSQL failed to compile on your computer or you found a bug that

is likely to be specific to one platform then please fill out this form
and e-mail it to [EMAIL PROTECTED]

To report any other bug, fill out the form below and e-mail it to
[EMAIL PROTECTED]

If you not only found the problem but solved it and generated a patch
then e-mail it to [EMAIL PROTECTED] instead.  Please use the
command "diff -c" to generate the patch.

You may also enter a bug report at http://www.postgresql.org/ instead of

e-mail-ing this form.



POSTGRESQL BUG REPORT TEMPLATE




Your name   : Jeff Davis
Your email address  : [EMAIL PROTECTED]


System Configuration
-
  Architecture (example: Intel Pentium) : Intel Pentium

  Operating System (example: Linux 2.0.26 ELF)  : Linux 2.2.17 ELF

  PostgreSQL version (example: PostgreSQL-7.0):   PostgreSQL-7.0.3

  Compiler used (example:  gcc 2.8.0)   : gcc 2.95.2


Please enter a FULL description of your problem:

I wished to perform a query that would involve boolean value addition
(i.e. true + false + true would equal 2 as an integer). The reason was
to keep track of the number of matches (i.e. if conditions are separated
by "OR" and a row matches 3 conditions, it would be moved higher
in the order than a row matching only two conditions).

I tried the procedure below to make the proper functions (as a user, not
a superuser) and the backend died. A full error log is at the end of
this email.
 Whenever I try to use my operator, it dies.
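For what it's worth, later PostgreSQL releases provide a built-in cast that makes the user-defined operators unnecessary (a sketch checked against modern versions, not against 7.0.3):

```sql
-- boolean::integer is a built-in cast on modern servers
SELECT (key=1)::int + (a='hi')::int + (b='bye')::int AS m FROM v1;
SELECT (1=1)::int + (2=2)::int;   -- 2
```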



Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
--

create function bool_to_int(bool) returns int as 'select 1 as result
where $1 union select 0 as result where not $1;' language 'sql';

create function bool_plus_bool (bool,bool) returns int as 'select
bool_to_int($1) + bool_to_int($2) as result;' language 'sql';

create function int_plus_bool (int,bool) returns int as 'select ($1) +
bool_to_int($2) as result;' language 'sql';

create function bool_plus_int (bool,int) returns int as 'select
bool_to_int($1) + ($2) as result;' language 'sql';

create operator +( procedure = bool_plus_bool, leftarg = bool, rightarg
= bool);
create operator +( procedure = bool_plus_int, leftarg = bool, rightarg =
int);
create operator +( procedure = int_plus_bool, leftarg = int, rightarg =
bool);

select key,a,b,((key=1)+(a='hi')+(b='bye')) as m from v1; #kills backend

select (1=1)+(2=2) as a; #also kills backend

If you know how this problem might be fixed, list the solution below:
-

Sorry, I am not yet familiar enough with the internals.






### error log #

FindExec: found "/usr/local/pgsql/bin/postgres" using argv[0]
binding ShmemCreate(key=52e2c1, size=1104896)
DEBUG:  Data Base System is starting up at Tue Dec 19 10:19:01 2000
DEBUG:  Data Base System was shut down at Tue Dec 19 10:18:52 2000
DEBUG:  Data Base System is in production state at Tue Dec 19 10:19:01
2000
proc_exit(0)
shmem_exit(0)
exit(0)
/usr/local/pgsql/bin/postmaster: reaping dead processes...
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 4
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 4
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling writing 4
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 5
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 5
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling writing 5
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 5
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling reading 5
/usr/local/pgsql/bin/postmaster: ServerLoop:  handling writing 5
/usr/local/pgsql/bin/postmaster: BackendStartup: environ dump:
-
 PWD=/usr/local/pgsql
 HOSTNAME=dynworks1
 MANPATH=:/usr/local/pgsql/man
 PS1=\h:\w\$
 USER=jdavis
 MACHTYPE=i386-pc-linux-gnu
 MAIL=/var/mail/jdavis
 LOGNAME=jdavis
 SHLVL=3
 SHELL=/bin/sh
 HOSTTYPE=i386
 OSTYPE=linux-gnu
 HOME=/var/lib/postgres
 TERM=xterm
 PGDATA=/usr/local/pgsql/data
 PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
 SSH_TTY=/dev/pts/0
 _=/usr/local/pgsql/bin/postmaster
 OLDPWD=/home/jdavis
 POSTPORT=5432
 POSTID=2147483646
 IPC_KEY=5432000
 PG_USER=jdavis
-
/usr/local/pgsql/bin/postmaster: BackendStartup: pid 770 user jdavis db
jdavis socket 5
/usr/local/pgsql/bin/postmaster child[770]: starting 

[BUGS] date bug

2001-05-16 Thread Jeff Patterson
Strange date behavior as shown below.

create table "holidays" ("date" date NOT NULL,"name" varchar(25));
insert into holidays values ('01-01-2001'::date,'New Years');
insert into holidays values ('01-15-2001'::date,'Kings Birthday');
insert into holidays values ('02-19-2001'::date,'Presidents Day');

create function is_holiday (date) returns bool as
'select
    case
        when $1=date then 1::bool
        else 0::bool
    end
from holidays' language 'sql';

test=# select date,is_holiday(date) from holidays;
    date    | is_holiday 
------------+------------
 2001-01-01 | t
 2001-01-15 | f
 2001-02-19 | f
(3 rows)
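The output follows from how SQL-language functions return scalars: the body yields one row per row of holidays and only the first is used, so only the first date ever compares true. A version that behaves as presumably intended (a sketch using EXISTS; not tested against 7.x):

```sql
create function is_holiday (date) returns bool as
'select exists(select 1 from holidays where "date" = $1)'
language 'sql';
```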





---
Jeffery S. Patterson
Meta-Lynx

e-mail: jpat@meta-lynx.com
Phone : 707-431-9320
  Fax : 707-433-2918

Meta-Lynx
132 Mill St. Suite 210
Healdsburg, CA 95448










[BUGS] BUG #2455: psql failing to restore a table because of a constraint violation.

2006-05-26 Thread Jeff Ross

The following bug has been logged online:

Bug reference:  2455
Logged by:  Jeff Ross
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   OpenBSD 3.9 -current
Description:psql failing to restore a table because of a constraint
violation.
Details: 

After the upgrade to 8.1.4, this script I use to sync a development database
with our live database began failing.

Here's the script:

#!/bin/sh
#backup script for postgresql databases
#
DATE=`date +%Y%m%d`
#dump the live wykids database
/usr/local/bin/pg_dumpall -p 5432 -c > \
/home/_postgresql/wykids$DATE.sql
#drop the development wykids database
/usr/local/bin/dropdb -p 5435 wykids
#recreate the development wykids database from the dump file we just made
/usr/local/bin/psql -p 5435 template1 -f \
/home/_postgresql/wykids$DATE.sql

Here's the failure:

psql:/home/_postgresql/wykids20060524.sql:84507: ERROR:  new row for
relation "Clearinghouse" violates check
constraint "refnumber_ck"

CONTEXT:  COPY Clearinghouse, line 1: "Video Three R's for Special Education
School Age Uniqueness and
Cultural Awareness 0.5 total 49.9500..."

Here's the record it barfs on:

wykids=# select * from "Clearinghouse" where "Training Material"
ilike('%three r%');
-[ RECORD 1 ]-+--
Type  | Video
Training Material | Three R's for Special Education
Category  | School Age
Section Found In  | Uniqueness and Cultural Awareness
Clock Hours   | 0.5
Notes | total
Price | 49.95
# books   | 1
RefNumber | V207.030

Here's the table structure:

wykids=# \d "Clearinghouse"
 Table "public.Clearinghouse"
  Column   | Type  | Modifiers
---+---+---
 Type  | character varying(50) |
 Training Material | character varying(75) |
 Category  | character varying(50) |
 Section Found In  | character varying(50) |
 Clock Hours   | real  |
 Notes | character varying(50) |
 Price | double precision  |
 # books   | character varying(10) |
 RefNumber | character varying(30) | not null
Indexes:
"clearinghouse_old_pk" PRIMARY KEY, btree ("RefNumber")
Check constraints:
"refnumber_ck" CHECK ("RefNumber"::text ~ 
similar_escape('[A-Z]|[0-9]|.'::text, NULL::text))
Rules:
refnumber_uppercase_ins AS
ON INSERT TO "Clearinghouse" DO  UPDATE "Clearinghouse" SET 
"RefNumber" = upper(new."RefNumber"::text)
  WHERE "Clearinghouse"."RefNumber"::text = new."RefNumber"::text

The value in the record cited doesn't violate the constraint, and removing
that record from the .sql file
caused the same failure on the very next record.

Using pg_dump -Fc instead also failed.

As a workaround, we dropped the constraint (not critical) to make sure we
still had backup capability.

Jeff Ross

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [BUGS] BUG #2634: path requires double \\ for WAL

2006-09-18 Thread Jeff Davis
On Mon, 2006-09-18 at 13:37 -0400, Bruce Momjian wrote:
> Yes, this is required because we use backslashes for escapes.  The
> backslashing can be turned off in 8.2.
> 

So the postgresql.conf setting "standard_conforming_strings" that turns
off backslash escaping affects the interpretation of other
postgresql.conf settings?

What if you have a path set for archive_command and then you change the
standard_conforming_strings GUC and SIGHUP the server? Will the path be
incorrect?
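For illustration, a hypothetical entry showing why the escaping mode matters (the path is made up; whether a SIGHUP under a different standard_conforming_strings setting would reinterpret it is exactly the open question above):

```
# While backslash escapes are in effect, each literal backslash in the
# command string must be doubled:
archive_command = 'copy "%p" "C:\\walarchive\\%f"'
```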

Regards,
Jeff Davis




[BUGS] BUG #2678: Create or replace function with OUT args

2006-10-06 Thread Jeff Trout

The following bug has been logged online:

Bug reference:  2678
Logged by:  Jeff Trout
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   OSX 10.4.8 (Also occurs on FC4 w/kernel 2.6.16)
Description:Create or replace function with OUT args
Details:

It seems there may be some bug with cache coherency when replacing
functions that use OUT arguments.  This happens on Linux (2.6.16) - same
version of PG. (I didn't test across the pg restart as others are using
that system.) I've created and replaced hundreds of "normal" functions,
so I think it has to do with OUT params.

Here's a transcript:

skittlebrau:/tmp postgres$ cat broken2.sql
create or replace function cachebroken(p_id int, out o_val1 int, out o_val2 int)
as $$
BEGIN

o_val1 := 12;
o_val2 := 45;

END
$$
language 'plpgsql';
skittlebrau:/tmp postgres$ psql indie
Welcome to psql 8.1.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

indie=# \i broken2.sql
CREATE FUNCTION
indie=# \i broken2.sql
CREATE FUNCTION
indie=# \i broken2.sql
CREATE FUNCTION
indie=# \i broken2.sql
CREATE FUNCTION
indie=# \q
skittlebrau:/tmp postgres$ psql indie
Welcome to psql 8.1.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

indie=# \i broken2.sql
psql:broken2.sql:10: ERROR:  missing cache data for cache id 27
indie=# \i broken2.sql
psql:broken2.sql:10: ERROR:  missing cache data for cache id 27
indie=# \q
skittlebrau:/tmp postgres$ pg_ctl -D /usr/local/pgsql/data/ -m fast restart

waiting for postmaster to shut downLOG:  logger shutting down
 done
postmaster stopped
postmaster starting
skittlebrau:/tmp postgres$ psql indie
Welcome to psql 8.1.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

indie=# \i broken2.sql
psql:broken2.sql:10: ERROR:  missing cache data for cache id 27
indie=# drop function cachebroken (int);
DROP FUNCTION
indie=# \i broken2.sql
CREATE FUNCTION
indie=# \q

Occurs on: indie=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 8.1.4 on powerpc-apple-darwin8.8.0, compiled by GCC
powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 (Apple Computer, Inc. build
5026)
(1 row)

indie=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1
20050727 (Red Hat 4.0.1-5)
(1 row)



a couple of the guys on irc have confirmed it on HEAD as well.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/







Re: [BUGS] Bug related to out of memory condition (more information)

2006-10-24 Thread Jeff Davis
I have made a clearer example of the bug I reported to -hackers
yesterday:

http://archives.postgresql.org/pgsql-hackers/2006-10/msg01252.php

The following example shows a simple case that fails on 8.0+ (including
CVS HEAD), but works fine on 7.4. There are two almost identical
situations, and one causes an ERROR and the other a PANIC. The only
difference is the column type: INT versus TEXT, respectively.

I am on FreeBSD. An OOM condition must be caused to see this bug. In
7.4, an OOM condition is not even caused for the query, so perhaps it
has the same bug, but handles foreign keys differently. Incidentally,
foreign keys are all AFTER triggers, even in 7.4, but I don't understand
why 7.4 doesn't exhaust itself of memory collecting the trigger events,
as is described in the following mailing list post:

http://archives.postgresql.org/pgsql-bugs/2006-05/msg00036.php

Also, and this is pure conjecture, this bug may be related to the
following change in the 8.0 release notes:
"Nondeferred AFTER triggers are now fired immediately after completion
of the triggering query, rather than upon finishing the current
interactive command. This makes a difference when the triggering query
occurred within a function: the trigger is invoked before the function
proceeds to its next operation. For example, if a function inserts a new
row into a table, any nondeferred foreign key checks occur before
proceeding with the function."

Regards,
Jeff Davis


Step 1: Create 4 tables
-
CREATE TABLE r1( i INT PRIMARY KEY );
INSERT INTO r1 VALUES(1);
CREATE TABLE r2( i INT PRIMARY KEY );
INSERT INTO r2 VALUES(1);
CREATE TABLE r3( i INT PRIMARY KEY );
INSERT INTO r3 VALUES(1);
CREATE TABLE r4( i INT PRIMARY KEY );
INSERT INTO r4 VALUES(1);


Step 2: Cause an out of memory condition. The result is an ERROR, as
expected.
-

BEGIN;

CREATE TABLE crashme (
  attr1INT REFERENCES r1(i),
  attr2INT REFERENCES r2(i),
  attr3INT REFERENCES r3(i),
  attr4INT REFERENCES r4(i),
  attr5TEXT
);

INSERT INTO crashme(attr1,attr2,attr3,attr4,attr5) SELECT 1,1,1,1,'t'
FROM generate_series(1,500);


Step 3: Do almost exactly the same thing, except attr5 is INT and not
TEXT type. This causes a PANIC instead of an ERROR. The bug is that this
should be only an ERROR, since everything is the same except the column
type for attr5.
---
BEGIN;

CREATE TABLE crashme (
  attr1INT REFERENCES r1(i),
  attr2INT REFERENCES r2(i),
  attr3INT REFERENCES r3(i),
  attr4INT REFERENCES r4(i),
  attr5INT
);

INSERT INTO crashme(attr1,attr2,attr3,attr4,attr5) SELECT 1,1,1,1,1 FROM
generate_series(1,500);







[BUGS] Out of memory error causes Abort, Abort tries to allocate memory

2006-10-25 Thread Jeff Davis
I found the root cause of the bug I reported at:

http://archives.postgresql.org/pgsql-bugs/2006-10/msg00211.php

What happens is this:
* Out of memory condition causes an ERROR
* ERROR triggers an AbortTransaction()
* AbortTransaction() calls RecordTransactionAbort()
* RecordTransactionAbort calls smgrGetPendingDeletes()
* smgrGetPendingDeletes() calls palloc()
* palloc() fails, resulting in ERROR, causing infinite recursion
* elog.c detects infinite recursion, and elevates it to PANIC

I'm not sure how easy this is to fix, but I asked on IRC and got some
agreement that this is a bug.
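The failure chain above can be modeled in a few lines. This is a toy sketch, not the backend's actual control flow; the function names are borrowed from the report, and the depth limit stands in for elog.c's errordata stack guard:

```python
class OutOfMemory(Exception):
    pass

class Panic(Exception):
    pass

ERROR_DEPTH_LIMIT = 3
depth = 0

def record_transaction_abort():
    # stands in for smgrGetPendingDeletes() -> palloc(): the abort path
    # itself tries to allocate, and allocation is exactly what just failed
    raise OutOfMemory("palloc failed")

def abort_transaction():
    global depth
    depth += 1
    if depth > ERROR_DEPTH_LIMIT:        # elog.c-style recursion guard
        raise Panic("ERRORDATA_STACK_SIZE exceeded")
    try:
        record_transaction_abort()
    except OutOfMemory:
        abort_transaction()              # an ERROR during abort re-enters abort

try:
    abort_transaction()
    outcome = "ok"
except Panic:
    outcome = "PANIC"

print(outcome)  # PANIC
```

Each out-of-memory ERROR re-enters the abort path, which fails the same way, until the recursion guard escalates to PANIC.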

It seems to me, in order to fix it, we would have to avoid allocating
memory on the AbortTransaction path. All smgrGetPendingDeletes() needs
to allocate is a few dozen bytes (depending on the number of relations
to be deleted). Perhaps it could allocate those bytes as the list of pending
deletes fills up. Or maybe we can somehow avoid needing to record the
relnodes to be deleted in order for the abort to succeed.

I'm still not sure why foreign keys on large insert statements don't eat
memory on 7.4, but do on 8.0+.

Regards,
Jeff Davis




Re: [BUGS] Out of memory error causes Abort, Abort tries to

2006-10-25 Thread Jeff Davis
On Wed, 2006-10-25 at 16:20 -0300, Alvaro Herrera wrote:
> Jeff Davis wrote:
> > I found the root cause of the bug I reported at:
> > 
> > http://archives.postgresql.org/pgsql-bugs/2006-10/msg00211.php
> > 
> > What happens is this:
> > * Out of memory condition causes an ERROR
> > * ERROR triggers an AbortTransaction()
> > * AbortTransaction() calls RecordTransactionAbort()
> > * RecordTransactionAbort calls smgrGetPendingDeletes()
> > * smgrGetPendingDeletes() calls palloc()
> > * palloc() fails, resulting in ERROR, causing infinite recursion
> > * elog.c detects infinite recursion, and elevates it to PANIC
> > 
> > I'm not sure how easy this is to fix, but I asked on IRC and got some
> > agreement that this is a bug.
> 
> Hmm, maybe we could have AbortTransaction switch to ErrorContext, which
> has some preallocated space, before calling RecordTransactionAbort (or
> maybe have RecordTransactionAbort itself do it).
> 
> Problem is, what happens if ErrorContext is filled up by doing this?  At
> that point we will be severely fscked up, and you probably won't get the
> PANIC either.  (Maybe it doesn't happen in this particular case, but
> seems a real risk.)
> 

If we have a way to allocate memory and recover if it fails, perhaps
RecordTransactionAbort() could set the "rels to delete" part of the log
record to some special value that means "There might be relations to
delete, but I don't know which ones". Then, if necessary, it could
determine the relations that should be deleted at recovery time.

This idea assumes that we can figure out which relations are abandoned,
and also assumes that smgrGetPendingDeletes() is the only routine that
allocates memory on the path to abort a transaction due to an out of
memory error.

Regards,
Jeff Davis





Re: [BUGS] Out of memory error causes Abort, Abort tries to

2006-10-26 Thread Jeff Davis
On Wed, 2006-10-25 at 18:15 -0400, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Jeff Davis wrote:
> >> * smgrGetPendingDeletes() calls palloc()
> >> * palloc() fails, resulting in ERROR, causing infinite recursion
> 
> > Hmm, maybe we could have AbortTransaction switch to ErrorContext, which
> > has some preallocated space, before calling RecordTransactionAbort (or
> > maybe have RecordTransactionAbort itself do it).
> 
> Seems like it'd be smarter to try to free some memory before we push
> forward with transaction abort.  ErrorContext has only a limited amount
> of space ...
> 

In the particular case I'm referring to, it's the referential integrity
constraints using all the memory. Is that memory allocated in a
convenient context to free before the abort?

Glancing at the code, I think that it would work to MemoryContextReset()
the query's memory context, because the pending deletes (of the
relnodes) are allocated in TopMemoryContext. After the query's memory
context is reset, there should be plenty of space to finish the abort
within that context.

Is there any data in the query's memory context that needs to be saved
after we know we're aborting?

Regards,
Jeff Davis




[BUGS] COPY fails on 8.1 with invalid byte sequences in text types

2006-10-27 Thread Jeff Davis
You can insert invalid UTF8 byte sequences into a TEXT type on an 8.1
installation by doing something like:

INSERT INTO foo(t) VALUES('\xFF');

Then, you can do a:

COPY foo TO '/some/file';

but if you try to do a:

COPY foo FROM '/some/file';

That will fail because /some/file contains invalid UTF8 sequences, even
though it's the same file you copied out.

It seems to be essentially a data corruption issue if applications
insert binary data in text fields using escape sequences. Shouldn't
PostgreSQL reject an invalid UTF8 sequence in any text type?

Regards,
Jeff Davis
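The failure mode the report describes can be sketched outside the database. A minimal Python model: the byte produced by an octal escape like '\377' is not a valid UTF-8 sequence, so data that COPY wrote out verbatim is rejected by UTF-8 validation on the way back in.

```python
# Illustrative sketch only -- models the round-trip failure, not COPY itself.
escaped_value = b"\xff"         # what INSERT ... VALUES('\xFF') stored

def is_valid_utf8(data: bytes) -> bool:
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

copy_out = escaped_value        # COPY TO writes the stored bytes verbatim
print(is_valid_utf8(copy_out))  # False -> COPY FROM fails validation
```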




Re: [BUGS] COPY fails on 8.1 with invalid byte sequences in text

2006-10-27 Thread Jeff Davis
On Fri, 2006-10-27 at 14:42 -0700, Jeff Davis wrote:
> It seems to be essentially a data corruption issue if applications
> insert binary data in text fields using escape sequences. Shouldn't
> PostgreSQL reject an invalid UTF8 sequence in any text type?
> 

Another note: PostgreSQL rejects invalid UTF8 sequences in other
contexts. For instance, if you use PQexecParams() and insert using type
text and any format (text or binary), it will reject invalid sequences.
It will of course allow anything to be sent when the type is bytea.

Also, I thought I'd publish the workaround that I'm using.

I created a function that seems to work for validating text data as
being valid UTF8.

CREATE OR REPLACE FUNCTION valid_utf8(TEXT) returns BOOLEAN
LANGUAGE plperlu AS
$valid_utf8$
use utf8;
return utf8::decode($_[0]) ? 1 : 0;
$valid_utf8$;

I just add a check constraint on all of my text attributes in all of my
tables. Not fun, but it works.

Regards,
Jeff Davis




Re: [BUGS] COPY fails on 8.1 with invalid byte sequences in text

2006-10-31 Thread Jeff Davis
On Fri, 2006-10-27 at 14:42 -0700, Jeff Davis wrote:
> You can insert invalid UTF8 bytes sequences into a TEXT type on an 8.1
> installation by doing something like:
> 

I created a patch that appears to fix the problem, and does not appear
to break anything else.

Is this acceptable?

Regards,
    Jeff Davis
--- pgsql.orig/src/backend/parser/parse_type.c	Tue Oct 31 10:30:22 2006
+++ pgsql/src/backend/parser/parse_type.c	Tue Oct 31 11:05:11 2006
@@ -23,6 +23,7 @@
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/syscache.h"
+#include "mb/pg_wchar.h"
 
 
 /*
@@ -367,6 +368,7 @@
 	Oid			typinput;
 	Oid			typioparam;
 
+	pg_verifymbstr(string,strlen(string),0);
 	typinput = ((Form_pg_type) GETSTRUCT(tp))->typinput;
 	typioparam = getTypeIOParam(tp);
 	return OidInputFunctionCall(typinput, string,



Re: [BUGS] COPY fails on 8.1 with invalid byte sequences in text

2006-10-31 Thread Jeff Davis
On Tue, 2006-10-31 at 16:13 -0500, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > I created a patch that appears to fix the problem, and does not appear
> > to break anything else.
> 
> ... except maybe bytea ...
> 

Ok. So then it seems that the only possible places to fix it are in
textin and all the other input functions for all the character types*,
or if we change COPY to use the send/recv functions rather than the
out/in functions.

I don't think we want to change the format for COPY, so is it reasonable
to change the input functions to reject invalid byte sequences?

COPY isn't just an issue for backups. Slony-I uses COPY to transfer
data, and if there are any invalid byte sequences then replication will
fail. The COPY doc page strongly implies that something COPY'd
out can be COPY'd back in the same way.

Is this not a bug? Is there a discussion on -hackers about this that I
missed?

Regards,
Jeff Davis

* I don't immediately know what we'd do about "char". I think people
expect it to accept 256 values, but clearly that would violate a lot of
encodings. However, the current behavior creates the same problem with
COPY.




Re: [BUGS] COPY fails on 8.1 with invalid byte sequences in text

2006-11-03 Thread Jeff Davis
On Tue, 2006-10-31 at 23:18 -0500, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > Is this not a bug?
> 
> I don't actually see that it is.  The documentation is perfectly clear
> on the point:
> 
>   (It is your responsibility that the byte sequences you create
>   are valid characters in the server character set encoding.)
> 
> (This is in 4.1.2.1. String Constants)  If you don't want to deal with
> this, don't use octal escapes to construct multibyte characters.
> 

I have thought about this some more, and I still disagree on a few
points.

First, as a DBA, there is no way to prevent the use of octal escapes,
because there is no option to turn them off. I can either inspect the
application, or just trust it, but I can't control it from the database
(short of adding a CHECK to every text column). Since it interferes with
the proper use of COPY, which in turn interferes with the use of pg_dump
and applications such as Slony, I think it is something that should be
controlled at the database level.

Second, you pointed out that my patch breaks BYTEA. I'd like to point
out that (as I understand it) the patch only breaks BYTEA if you rely on
the cstring escaping to pass binary data to byteain, like:

=> SELECT E'\377'::bytea;

In my opinion, that's wrong anyway, because it will fail if you do
something like:

=> SELECT E'\377\000\377';

Because the NULL in the middle terminates the cstring, which passes a 1-
byte string to byteain, resulting in a 1-byte value, instead of a 3-byte
value which it should be.
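The truncation described above can be sketched directly: input functions receive the literal as a NUL-terminated C string, so everything after the first zero byte never reaches byteain.

```python
# Illustrative sketch of C-string truncation, outside the database.
raw = b"\xff\x00\xff"                   # what E'\377\000\377' produces
as_cstring = raw.split(b"\x00", 1)[0]   # what a C-string consumer sees

print(len(raw), len(as_cstring))  # 3 1
```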

However, if you pass ASCII data to byteain that represents binary data,
like:

=> SELECT E'\\377\\000\\377'::bytea;

It seems to work just fine, and that's what PQescapeByteaConn() does.
There may be a backwards-compatibility issue, but I don't think my patch
is 100% broken (unless, of course, you found some other way that it's
broken).

This is a practical concern because some applications use escaping that
protects against SQL injection, but does not protect against invalid
byte sequences. It's not obvious to an application programmer that
protecting against SQL injection is not enough, particularly if the
programmer is trying to be "database agnostic" or doesn't test against
different encodings. Ideally, they should use PQescapeStringConn() and
not convert anything to an octal escape, but that's not always the case.

Lastly, if we retain the current behavior, I think a note should be
added to the COPY docs. The current docs imply that if you COPY it out,
you can COPY it back in. Even if I read the above documentation note
directly after reading the COPY documentation, it would not make me
think that COPY could fail for built-in types.

Regards,
Jeff Davis




[BUGS] 8.2 bug with outer join reordering

2006-12-06 Thread Jeff Davis

Thanks to rcohen on IRC yesterday for pointing this out and providing
his query + EXPLAIN output.

It looks like he still hasn't posted it to -bugs, and I was finally able
to reproduce it in a narrower, self contained test case, so I'm posting
this right now.

On 8.1 this returns 1 record. On 8.2 this returns 10. It appears to
be applying the filter too soon, and then it does an outer join which
violates the WHERE.

Regards,
    Jeff Davis

test=> SELECT version();
  version
-
 PostgreSQL 8.2.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.5
20051201 (Red Hat 3.4.5-2)
(1 row)

test=> CREATE TABLE t1 (a int, b int);
CREATE TABLE
test=> CREATE TABLE t2 (c int, d int);
CREATE TABLE
test=> CREATE TABLE t3 (e int, f int);
CREATE TABLE
test=> CREATE TABLE t4 (g int, h int);
CREATE TABLE
test=>
test=> INSERT INTO t1 SELECT generate_series, 1 from generate_series
(1,10);
INSERT 0 10
test=> COPY t2 FROM stdin DELIMITER ',';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1,19
2,22
3,23
4,24
5,25
6,26
7,27
8,28
9,29
10,30
11,31
\.
test=> INSERT INTO t3 SELECT generate_series, 10 from generate_series
(1,1);
INSERT 0 1
test=> INSERT INTO t4 VALUES
test->   (19,4);
INSERT 0 1
test=>
test=> CREATE INDEX t3_e_idx ON t3 (e);
CREATE INDEX
test=> CREATE INDEX t4_g_idx ON t4 (g);
CREATE INDEX
test=> SELECT COUNT(*) FROM
test-> t1
test-> LEFT JOIN
test-> t2 ON ( t1.a = t2.c )
test-> LEFT JOIN
test-> t3 ON ( t2.d = t3.e )
test-> LEFT JOIN
test-> t4 ON ( t2.d = t4.g )
test-> WHERE ( t3.e = 19 OR t4.g = 19);
 count

 10
(1 row)

test=> EXPLAIN SELECT COUNT(*) FROM
test-> t1
test-> LEFT JOIN
test-> t2 ON ( t1.a = t2.c )
test-> LEFT JOIN
test-> t3 ON ( t2.d = t3.e )
test-> LEFT JOIN
test-> t4 ON ( t2.d = t4.g )
test-> WHERE ( t3.e = 19 OR t4.g = 19);
  QUERY PLAN
-
 Aggregate  (cost=831486.42..831486.43 rows=1 width=0)
   ->  Merge Left Join  (cost=22541.80..715990.92 rows=46198200 width=0)
 Merge Cond: (t1.a = t2.c)
 ->  Sort  (cost=10626.30..10864.44 rows=95254 width=4)
   Sort Key: t1.a
   ->  Seq Scan on t1  (cost=0.00..1443.54 rows=95254
width=4)
 ->  Sort  (cost=11915.49..12157.99 rows=97000 width=4)
   Sort Key: t2.c
   ->  Hash Left Join  (cost=136.35..2554.63 rows=97000
width=4)
 Hash Cond: (t2.d = t4.g)
 Filter: ((t3.e = 19) OR (t4.g = 19))
 ->  Merge Right Join  (cost=135.34..2061.34
rows=97000 width=12)
   Merge Cond: (t3.e = t2.d)
   ->  Index Scan using t3_e_idx on t3
(cost=0.00..446.00 rows=1 width=4)
   ->  Sort  (cost=135.34..140.19 rows=1940
width=8)
 Sort Key: t2.d
 ->  Seq Scan on t2  (cost=0.00..29.40
rows=1940 width=8)
 ->  Hash  (cost=1.01..1.01 rows=1 width=4)
   ->  Seq Scan on t4  (cost=0.00..1.01 rows=1
width=4)
(19 rows)

test=>
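The mis-ordering can be reproduced outside the database with a small sketch of the two plan shapes (table data copied from the transcript above; this models the join order, not the planner itself). Applying the WHERE clause after all the outer joins yields 1 row; pushing it below the final join against t1 turns it into a join filter, so every t1 row survives NULL-extended:

```python
# Illustrative sketch of the 8.1 (correct) vs 8.2 (buggy) plan shapes.
def left_join(left, right, on):
    out = []
    for l in left:
        matches = [l + r for r in right if on(l, r)]
        out.extend(matches if matches else [l + (None,) * len(right[0])])
    return out

t1 = [(a, 1) for a in range(1, 11)]                   # (a, b)
t2 = [(1, 19)] + [(c, c + 20) for c in range(2, 12)]  # (c, d)
t3 = [(1, 10)]                                        # (e, f)
t4 = [(19, 4)]                                        # (g, h)

def pred(e, g):
    return e == 19 or g == 19

# Correct (8.1): join everything, then apply the WHERE clause.
full = left_join(left_join(left_join(t1, t2, lambda l, r: l[0] == r[0]),
                           t3, lambda l, r: l[3] == r[0]),
                 t4, lambda l, r: l[3] == r[0])
correct = sum(1 for row in full if pred(row[4], row[6]))

# Buggy (8.2): filter t2/t3/t4 first, then LEFT JOIN t1 against it --
# every t1 row survives, NULL-extended when it no longer matches.
inner = [row for row in
         left_join(left_join(t2, t3, lambda l, r: l[1] == r[0]),
                   t4, lambda l, r: l[1] == r[0])
         if pred(row[2], row[4])]
buggy = len(left_join(t1, inner, lambda l, r: l[0] == r[0]))

print(correct, buggy)  # 1 10
```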





[BUGS] Function returns wrong data after datatype change

2007-01-24 Thread Jeff Trout
I just ran across this, and I do not think it is entirely a PG bug, or
even something that the backend can detect and handle. The problem stems
from swapping a table definition out from under a function. I had a
rather large table with a number of double precision (dp) fields, and in
a battle to make it smaller, and thus fit more in RAM, I changed them to
float4 (real). I did not do it with ALTER TABLE ... TYPE ...; instead I
made a new table, ran INSERT INTO newtbl SELECT * FROM oldtbl, and then
switched the names. Note that while trying to reproduce this, if I
reloaded the function used to induce it, PG does complain about a
datatype mismatch.


However, one thing that happens is that you can successfully pg_dump
the new db (with the altered table) and reload it, and that function
will not complain.


Here's a self contained example.

createdb broken1
psql broken1

create table brokendp
(
   cik int,
   trade_date timestamp,
   open_price double precision,
   high_price double precision,
   low_price double precision,
   close_price double precision,
   volume bigint,
   id int
);

insert into brokendp values (803016, '19940103', 0, 9.375, 9.375,  
9.375, 200, 9644195);
insert into brokendp values (12345, '19950101', 1.12, 2.23, 3.34,  
4.45, 1000, 1234567);


create or replace function getBrokenDP(int)
returns double precision
as $$
   select close_price
  from brokendp
  where cik = $1
  order by trade_date asc
  limit 1
$$
language 'sql';

select '803', getbrokendp(803016);
select '123', getbrokendp(12345);

create table newbrokendp
(
   cik int,
   trade_date timestamp,
   open_price real,
   high_price real,
   low_price real,
   close_price real,
   volume bigint,
   id int
);

--
-- I do not think there is anything we can do about
-- this from a PG perspective.
--
insert into newbrokendp select * from brokendp;
alter table brokendp rename to oldbrokendp;
alter table newbrokendp rename to brokendp;

select 'switch';

select '803', getbrokendp(803016);
select '123', getbrokendp(12345);

commit;
\q

pg_dump broken1 > broken1.sql
createdb broken2
psql -f broken1.sql broken2


You'll see the numbers come out radically different
(i.e. 9.375 changing to 5.39500333695425e-315),

and when you restore the backup, the getBrokenDP function will not
raise a datatype complaint, so this error will go on for a long time
before it creeps up somewhere.
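One plausible reading of where that strange number comes from (an assumption, not confirmed by the report): the function still returns the column as double precision, so the 4-byte float4 bit pattern for 9.375 gets interpreted as the low half of an 8-byte float8. Reinterpreting the bits that way in Python reproduces the reported value:

```python
# Illustrative sketch: treat float4 bits as the low word of a float8.
import struct

float4_bits = struct.pack("<f", 9.375)   # 4 bytes: 00 00 16 41
as_double = struct.unpack("<d", float4_bits + b"\x00" * 4)[0]

print(as_double)  # a denormal around 5.395e-315, matching the report
```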


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/






Re: [BUGS] Function returns wrong data after datatype change

2007-01-24 Thread Jeff Trout


On Jan 24, 2007, at 12:24 PM, Tom Lane wrote:


Jeff Trout <[EMAIL PROTECTED]> writes:

I just ran across this, and I do not think it is entirely a PG bug or
even something that the backend can detect and handle.
The problem stems from swapping a table definition from under a
function.


Hmm.  This should yield an error (SQL function not returning the type
it claims to), and we probably should plug the hole by invoking
check_sql_fn_retval every time not just at creation.

I thought you were about to complain about plpgsql, which has much
worse problems due to plan caching...


The really curious thing is that it doesn't complain when restoring
from the dump - or are those errors suppressed?

--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/






[BUGS] Grantor name gets lost when grantor role dropped

2007-04-16 Thread Jeff Davis
I am sending this email on behalf of Russel Smith. He discovered this
bug and his description follows:

Verified on 8.2.3 on Fedora Core 6
Verified on 8.1.3 on RHEL4, custom compile. (I can't control the update
to 8.1.8.)

The output of an empty role name would possibly not be a problem, but
when you are doing a dump and restore, pg_dumpall dumps invalid syntax
as below:

GRANT "postgres" TO "test_role" GRANTED BY "";

We either need to rethink the way we handle grantor information and when
it is valid, or we need to at least allow dump/restore to work as
expected when a dropped role granted privileges to other users.

To add to my woes when investigating this, the GRANTED BY syntax is not
included in the 8.2 documentation at all. It is not listed as valid
syntax, and there are no comments saying what it does.

The self-contained test case to produce this is below:
 
Regards
 
Russell Smith
 
psql postgres < bug.sql 2>&1 > output.txt
 
CREATE ROLE test_role
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE;
 
CREATE ROLE invalid_grantor
  SUPERUSER INHERIT NOCREATEDB NOCREATEROLE;
 
SET ROLE invalid_grantor;
GRANT "postgres" TO "test_role";
SET ROLE postgres;
 
select * from pg_roles;
 
select pg_auth_members.*, ur.rolname, gr.rolname from pg_auth_members LEFT JOIN 
pg_roles ur ON roleid = oid
LEFT JOIN pg_roles gr ON gr.oid = grantor;
 
DROP ROLE invalid_grantor;
 
select pg_auth_members.*, ur.rolname, gr.rolname from pg_auth_members LEFT JOIN 
pg_roles ur ON roleid = oid
LEFT JOIN pg_roles gr ON gr.oid = grantor;
 
DROP ROLE test_role;




[BUGS] pg_dump doesn't properly honor -O for sequences

2007-04-25 Thread Jeff Davis

pg_dump -O apparently removes all instances of ALTER TABLE ... OWNER TO
from the output, but does not remove ALTER SEQUENCE ... OWNED BY from
the output.

Specifically this is with SERIAL sequences, which are dumped using the
ALTER SEQUENCE syntax (as of 8.2). Normal sequences are dumped using the
ALTER TABLE ... OWNER TO syntax, which properly honors -O.

Regards,
Jeff Davis




Re: [BUGS] not bug after all, sorry for the noise

2007-04-25 Thread Jeff Davis
On Wed, 2007-04-25 at 11:36 -0700, Jeff Davis wrote:
> pg_dump -O apparently removes all instances of ALTER TABLE ... OWNER TO
> from the output, but does not remove ALTER SEQUENCE ... OWNED BY from
> the output.
> 
> Specifically this is with SERIAL sequences, which are dumped using the
> ALTER SEQUENCE syntax (as of 8.2). Normal sequences are dumped using the
> ALTER TABLE ... OWNER TO syntax, which properly honors -O.
> 

Sorry for the noise. I misinterpreted the meaning of OWNED BY to refer
to the user that owns the sequence, rather than the table that owns it.

Regards,
Jeff Davis




Re: [BUGS] pg_dump doesn't properly honor -O for sequences

2007-04-25 Thread Jeff Davis
On Wed, 2007-04-25 at 16:44 -0400, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > pg_dump -O apparently removes all instances of ALTER TABLE ... OWNER TO
> > from the output, but does not remove ALTER SEQUENCE ... OWNED BY from
> > the output.
> 
> Why should it?  That's not ownership in the same sense.
> 

I sent a retraction of the bug report quickly afterward. My apologies.

Regards,
Jeff Davis




Re: [BUGS] BUG #3320: Error when using INSERT...RETURNING as a subquery

2007-05-29 Thread Jeff Davis
On Tue, 2007-05-29 at 18:10 +0100, Gregory Stark wrote:
> It has the same problem that SELECT triggers have. How many rows should you
> expect that subquery to insert, update, or delete if it's used in a join
> clause? Or in the where clause of another insert/update/delete statement?
> 

We could handle it essentially like a volatile set-returning function.

It may be easy to shoot oneself in the foot, but that is true for many
uses of volatile functions.

If the argument is that we shouldn't make it any easier, that's a fair
point, but this is one possible definition.

Regards,
Jeff Davis




Re: [BUGS] BUG #3320: Error when using INSERT...RETURNING as a subquery

2007-05-30 Thread Jeff Davis
On Tue, 2007-05-29 at 22:41 -0400, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > On Tue, 2007-05-29 at 18:10 +0100, Gregory Stark wrote:
> >> It has the same problem that SELECT triggers have. How many rows should you
> >> expect that subquery to insert, update, or delete if it's used in a join
> >> clause? Or in the where clause of another insert/update/delete statement?
> 
> > We could handle it essentially like a volatile set-returning function.
> 
> Uh-huh.  Please provide a concise, accurate definition of what that
> does.  For extra points, be sure it describes the behavior of all recent
> Postgres versions.  (And after that, we could argue about whether we
> actually *like* the described behavior ... which I'll bet we won't.)
> 

I understand that we don't make many guarantees about when and how many
times volatile functions are executed (the most obvious example is the
WHERE clause).

I also understand the argument that we don't want to extend that
uncertainty to UPDATE ... RETURNING.

It is possible to define behavior though, because it's already done for
volatile functions. Even if it's not a good definition, and even if that
definition changes between versions and is non-deterministic, it seems
like it offers some kind of starting place.

Regards,
Jeff Davis




[BUGS] tsvector that can't be dumped/reloaded in 8.3beta

2007-11-02 Thread Jeff Davis
I was unable to dump from beta1 to beta2, and I narrowed it down to this
simpler case:

test=> select tsvectorin(tsvectorout(
to_tsvector($foo$ a.b.cd/x=mnop.q\  $foo$)
));
ERROR:  syntax error in tsvector: "'a.b.cd':2 'a.b.cd/x':1 '/x=mnop.q
\':3"

This has the same result on beta1 and beta2. 

I'm using en_US.UTF-8 on FreeBSD.

Regards,
Jeff Davis








[BUGS] ctrl \ makes psql 8.2.5 dump core

2008-01-03 Thread Jeff Ross

Hi all,

I found out this morning that entering Ctrl-\ at the psql prompt will 
make psql dump core. This is reproducible on the two machines I have 
here.  Another server running downtown just quits without dumping core.

All are running 8.2.5 on OpenBSD.

I thought I was in the editor, and that key combination invokes nano's 
search and replace function.


[EMAIL PROTECTED]:/var/www/wykids/training-calendar $ psql --version
psql (PostgreSQL) 8.2.5
contains support for command-line editing
[EMAIL PROTECTED]:/var/www/wykids/training-calendar $ psql wykids
Welcome to psql 8.2.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

[EMAIL PROTECTED] localhost# Quit (core dumped)

The core file is less than a meg in size, so I can either upload it to 
our web site or e-mail it directly to someone if anyone is interested.


Thanks,

Jeff Ross



Re: [BUGS] ctrl \ makes psql 8.2.5 dump core

2008-01-03 Thread Jeff Ross

Shelby Cain wrote:

I believe this is expected behavior: the Ctrl-\ keystroke sends SIGQUIT 
to the current process.  Any program that doesn't explicitly handle SIGQUIT 
will abort.

Regards,

Shelby Cain


Okay, thanks for the reply, and sorry for the noise.

Jeff
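The explanation above can be sketched in a few lines: a process that neither handles nor ignores SIGQUIT is terminated with a core dump, while installing a disposition up front (as a program may choose to) changes the outcome.

```python
# Illustrative sketch (Unix only): opt out of SIGQUIT's default abort,
# then simulate pressing Ctrl-\ by delivering the signal to ourselves.
import os
import signal

signal.signal(signal.SIGQUIT, signal.SIG_IGN)  # without this: core dump
os.kill(os.getpid(), signal.SIGQUIT)           # simulate Ctrl-\

survived = True
print(survived)  # True
```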



Re: [BUGS] BUG #3855: backend sends corrupted data on EHOSTDOWN error

2008-01-08 Thread Jeff Davis
On Tue, 2008-01-08 at 01:50 +, Scot Loach wrote:
> The following bug has been logged online:
> 
> Bug reference:  3855
> Logged by:  Scot Loach
> Email address:  [EMAIL PROTECTED]
> PostgreSQL version: 8.2.4
> Operating system:   freebsd 6.1
> Description:backend sends corrupted data on EHOSTDOWN error
> Details: 
> 

This is a FreeBSD bug. 

http://www.freebsd.org/cgi/query-pr.cgi?pr=100172

It has been fixed here:

http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_output.c

in revision 1.112.2.1.

I ran into this bug too, and it was very frustrating! For me, it
manifested itself as SSL errors.

You can demonstrate the problem with SSH as well (inducing an ARP
failure will terminate the SSH session, when TCP should protect you
against that), so it is clearly not a PostgreSQL bug.

Thanks to "Andrew - Supernews" (a PostgreSQL user) for tracking this bug
down.

Regards,
Jeff Davis




Re: [BUGS] BUG #3855: backend sends corrupted data on EHOSTDOWNerror

2008-01-08 Thread Jeff Davis
On Tue, 2008-01-08 at 12:57 -0500, Scot Loach wrote:
> This may be true, but I still think PostgreSQL should be more defensive
> and actively terminate the connection when this happens (like ssh does)

I think postgresql's behavior is well within reason. Let me explain:

What is happening is that FreeBSD *actually sends the data* before
returning EHOSTDOWN as an error, while leaving the TCP connection open!
At the time I was tracking this problem down, I wrote a C program to
demonstrate that fact. This is the core of the reason why it manifests
as a protocol violation in PostgreSQL (or an SSL error) rather than as a
disconnection.

I think PostgreSQL is making the assumption here that an unrecognized
error code from send() that leaves the connection in a good state, is a
temporary error that may be resolved. Thus, PostgreSQL assumes that due
to the error, no data was written, and re-sends the data, succeeding
this time. I reason that the openssl library makes similar assumptions
(i.e. assuming an error means the data wasn't sent, and resets some
internal SSL protocol state), otherwise I wouldn't get SSL errors
afterward, but it would manifest itself as a PostgreSQL protocol
violation regardless of whether you're using SSL or not.

If the OS leaves a TCP connection open, I think it is perfectly
reasonable for an application to assume that the OS has sent exactly as
many bytes as it said it sent; no more, no less.

I would lean toward the opinion that postgresql works just fine now, and
that TCP is explicitly designed to prevent these kinds of problems, and
we only see this problem because FreeBSD 6.1 TCP is broken.
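The broken OS behavior described above can be modeled with a toy socket: send() transmits the bytes but still reports an error, while the sender assumes nothing went out and retries. The peer then receives the payload twice, which a length-prefixed protocol reads as corruption.

```python
# Toy model only -- the failure injection is invented for illustration.
class BuggySend:
    def __init__(self):
        self.wire = b""           # bytes the peer will actually receive
        self.tripped = False

    def send(self, data: bytes) -> int:
        if not self.tripped:
            self.tripped = True
            self.wire += data              # the data goes out anyway...
            raise OSError("EHOSTDOWN")     # ...but an error is reported
        self.wire += data
        return len(data)

sock = BuggySend()
message = b"row-data"
try:
    sock.send(message)
except OSError:
    sock.send(message)  # "the error means nothing was written" -- resend

print(sock.wire)  # b'row-datarow-data'
```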

Regards,
Jeff Davis




Re: [BUGS] BUG #3855: backend sends corrupted data onEHOSTDOWNerror

2008-01-08 Thread Jeff Davis
On Tue, 2008-01-08 at 14:06 -0500, Scot Loach wrote:
> I agree this would be fine if PostgreSQL works the way you say below.
> 
> However, PostgreSQL does not look at the # of bytes written and continue
> sending after that many bytes.  PostgreSQL actually simply clears its
> buffer of bytes to send on this error, in this code:
> 
> pqcomm.c:1075
> /*
>  * We drop the buffered data anyway so that processing can
>  * continue, even though we'll probably quit soon.
>  */
> PqSendPointer = 0;
> return EOF;
> 
> 
> The result as I saw on a system where this was occurring, was that when
> PostgreSQL was sending back a large result set, there was simply a
> fragment of it missing.

I think I see what you are saying. I was thinking about fe-misc.c, where
it explicitly says (in the default case of a switch statement on the
return value):

/* We don't assume it's a fatal error... */
conn->outCount = 0;
return -1;

(but that's on the frontend, obviously)

I think the problem you're talking about comes from the callers of
pq_putmessage, which simply ignore any return value at all (and thus do
not retransmit the message). I agree that is a problem (assuming I
understand what's going on).

Regards,
Jeff Davis




Re: [BUGS] BUG #3979: SELECT DISTINCT slow even on indexed column

2008-02-21 Thread Jeff Davis
On Thu, 2008-02-21 at 23:34 +, David Lee wrote:
> Finally, I ran:
>  SELECT a, b FROM x GROUP BY a, b;
> 
> But it was still the same.
> 
> Next I created an index on ("a") and ran the query:
>  SELECT DISTINCT a FROM x
> 
> but the same thing happened (first didn't use the index; after turning
> seq-scan off, was still slow; tried using GROUP BY, still slow).
> 
> The columns "a" and "b" are NOT NULL and have 100 distinct values each. The
> indexes are all btree indexes.

If there are only 100 distinct values each, then that's only (at most)
10k distinct (a,b) pairs.

To me it sounds like it would be most efficient to use a HashAggregate,
which can only be used by the "GROUP BY" variant of the query you ran
(DISTINCT can't use that plan).

First, try to force a HashAggregate and see what the results are. If
that is faster, the planner is not choosing the right plan. Try ANALYZE
to update the statistics, and if that doesn't work, post EXPLAIN
results.
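The plan-forcing experiment suggested above can be sketched like this (a hypothetical psql session; the table and column names are taken from the report):

```sql
-- Discourage the sort-based plan so the planner considers HashAggregate.
-- enable_* settings are session-local and reset on disconnect.
SET enable_sort = off;

EXPLAIN ANALYZE
SELECT a, b FROM x GROUP BY a, b;

-- Put things back, and refresh planner statistics before comparing again.
RESET enable_sort;
ANALYZE x;
```

If the EXPLAIN output then shows a HashAggregate node with a lower runtime, the planner's statistics (or cost settings) are the thing to chase.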

Also, this post is somewhat off-topic for -bugs, try posting to -general
or -performance with this type of question.

Regards,
Jeff Davis




[BUGS] BUG #4085: No implicit cast after coalesce

2008-04-02 Thread Jeff Dwyer

The following bug has been logged online:

Bug reference:  4085
Logged by:  Jeff Dwyer
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.1
Operating system:   Mac OS X
Description:No implicit cast after coalesce
Details: 

This works fine:
select 1 where current_date between '1900-3-3' and '1900-2-2';
This doesn't:
select 1 where current_date between coalesce(null,current_date)  and
coalesce(null, '1900-1-2'); 

This fix works:
select 1 where current_date between coalesce(null,current_date)  and
coalesce(null, date('1900-1-2')); 


This seems like a bug to me. Why should an explicit cast be necessary after
a coalesce? This broke code that worked in 8.1.

Thanks,
-Jeff

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #4085: No implicit cast after coalesce

2008-04-03 Thread Jeff Dwyer
OK, worksforme. I guess I still find it odd, but I much prefer
explicitness & robustness to small values of 'work'.


Thanks for the prompt response.

-Jeff

On Apr 2, 2008, at 7:15 PM, Tom Lane wrote:


Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Jeff Dwyer wrote:
>> This seems like a bug to me. Why should an explicit cast be necessary
>> after a coalesce?

> Because coalesce(null, '1900-1-2') has no other type information
> attached, so it would have picked text by default as result type, and
> that then clashes with the result type of coalesce(null,current_date),
> which can be derived to be date.  This is a robustness improvement: 8.2
> and earlier would silently accept coalesce(null, 'abc') and apply
> text-semantics comparison.

Yes.  The query "worked" in pre-8.3 only for rather small values of
"work": if you had been using a non-ISO datestyle the comparisons would
in fact have come out wrong.  Also, it being a textual rather than date
comparison, any index on the date column being compared to wouldn't have
been used.

			regards, tom lane
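An alternative to casting the literal, following the type-derivation logic above, is to type the NULL so that the second COALESCE resolves to date (a hypothetical variant of the reporter's query, not taken from the thread):

```sql
-- The reporter's fix: cast the literal.
SELECT 1 WHERE current_date BETWEEN coalesce(null, current_date)
                                AND coalesce(null, date '1900-1-2');

-- Equivalent: give the NULL an explicit type; the unknown literal
-- '1900-1-2' is then resolved against date instead of defaulting to text.
SELECT 1 WHERE current_date BETWEEN coalesce(null, current_date)
                                AND coalesce(null::date, '1900-1-2');
```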





[BUGS] views are not auto completed on psql

2002-05-27 Thread Jeff Post

The best way that I can think to describe this is that views are not added
as candidates for possible line completions in psql.
So if I have a view named org_details and a table named organizations,
and I type "select * from org" and hit tab, the completion is
organizations rather than a list offering both organizations and org_details.


Thanks,
Jeff Post
[EMAIL PROTECTED]





[BUGS] VACUUM ANALYZE differs under 7.2.3 from 7.2.1

2002-10-28 Thread Jeff Boes
The "VACUUM ANALYZE tablename" command does not have the same effect on 
table metadata under 7.2.1 and 7.2.3.  In particular, I've noted that 
pg_class.reltuples is not updated by vacuuming after a delete.

Here is a sequence of SQL commands to demonstrate the difference.  Under
7.2.1, the resulting last three displays of the pg_class.reltuples value
will be zero (0), while under 7.2.3 they will be 10.

create table foo (a char(1));

copy foo from stdin;
a
a
a
a
a
a
a
a
a
a
\.

\x
\t

select count(*) from foo;
select reltuples from pg_class where relname = 'foo';

vacuum foo;
select reltuples from pg_class where relname = 'foo';

vacuum analyze foo;
select reltuples from pg_class where relname = 'foo';

analyze foo;
select reltuples from pg_class where relname = 'foo';

delete from foo;
select reltuples from pg_class where relname = 'foo';
select count(*) from foo;
select reltuples from pg_class where relname = 'foo';

vacuum foo;
select reltuples from pg_class where relname = 'foo';

vacuum analyze foo;
select reltuples from pg_class where relname = 'foo';

analyze foo;
select reltuples from pg_class where relname = 'foo';

drop table foo;






[BUGS] pg_autovacuum bug with temp tables?

2004-02-17 Thread Jeff Boes
[Apologies to anyone who is seeing this for the second time; the author 
of pg_autovacuum suggested it belonged here, so I am reposting.]

Recently I installed and started pg_autovacuum against my new Pg 7.4.1
installation. We use a fairly large number of temporary tables within an
application (that is, several copies of this application may be running,
and each creates and drops several temp tables as they cycle through
their workload). Here's what I think happened, based on the log
(pg_autovacuum's and the postmaster's):
pg_autovacuum.log:
[2004-02-15 08:10:01 AM] Performing: ANALYZE "pg_temp_13"."tmp_targs"
[2004-02-15 08:10:01 AM] Can not refresh statistics information from the
database nexcerpt.
[2004-02-15 08:10:01 AM] The error is [ERROR:  relation
"pg_temp_13.tmp_targs" does not exist
postmaster.log:
2004-02-15 08:10:01 [31563] ERROR:  relation "pg_temp_13.tmp_targs" does
not exist
2004-02-15 08:10:01 [31563] LOG:  unexpected EOF on client connection
It appears that pg_autovacuum collected the name of a temp table, and
later tried to analyze it. The table was gone by then, and this caused
the daemon to exit. As this happened on a Sunday morning, my weekend
experiment to see how pg_autovacuum would maintain our test database was
rather spoiled ... 8-(
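For what it's worth, a vacuuming daemon can avoid racing against short-lived temp tables by never collecting them in the first place; a sketch of such a filter (an illustration, not pg_autovacuum's actual query):

```sql
-- Candidate tables for vacuum/analyze: ordinary relations only,
-- skipping system schemas and every backend's pg_temp_N schema.
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT LIKE 'pg_temp%'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```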
--
Jeff Boes  vox 269.226.9550 ext 24
Database Engineer fax 269.349.9076
Nexcerpt, Inc. http://www.nexcerpt.com
  ...Nexcerpt... Extend your Expertise




[BUGS] backend crash with certain statements/tables

2004-03-02 Thread Jeff Bohmer
Reproduced in PG 7.4.1 on ...

OS X Server 10.2.8
OS X 10.2.8
RHL 9 (2.4.20-30.9)
Running any of these statements on my database causes the backend to 
crash (example from PG log below):

create temp table foo as select * from server_prefs limit 1;
create table foo as select * from server_prefs limit 1;
create temp table foo as select * from agency_dbs limit 1;
create table foo as select * from agency_dbs limit 1;
All of these statements produce expected results (no crash):

create temp table foo as select * from c_group limit 1;
create table foo as select * from c_group limit 1;
create table foo as select * from server_prefs;
create table foo as select * from agency_dbs;
select * from server_prefs limit 1;
select * from agency_dbs limit 1;
Attached is a file that creates a database and only the server_prefs 
table and then reproduces the crash for me on both OS X and Linux 
when I run it like:

	psql -d template1 -U postgres -f pg_crash

Thanks,
- Jeff


2004-03-01 10:32:24 [471] LOG:  connection received: host=[local] port=
2004-03-01 10:32:24 [471] LOG:  connection authorized: user=username 
database=cos
2004-03-01 10:32:34 [471] LOG:  statement: create temp table foo as 
select * from server_prefs limit 1;
2004-03-01 10:32:35 [19913] LOG:  server process (PID 471) was 
terminated by signal 10
2004-03-01 10:32:35 [19913] LOG:  terminating any other active server processes
2004-03-01 10:32:35 [429] WARNING:  terminating connection because of 
crash of another server process
DETAIL:  The postmaster has commanded this server process to roll 
back the current transaction and exit, because another server process 
exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database 
and repeat your command.
2004-03-01 10:32:35 [474] LOG:  connection received: host=[local] port=
2004-03-01 10:32:35 [474] FATAL:  the database system is in recovery mode
2004-03-01 10:32:35 [19913] LOG:  all server processes terminated; 
reinitializing
2004-03-01 10:32:35 [475] LOG:  database system was interrupted at 
2004-03-01 10:27:58 MST
2004-03-01 10:32:35 [475] LOG:  checkpoint record is at 5/1350BF6C
2004-03-01 10:32:35 [475] LOG:  redo record is at 5/1350BF6C; undo 
record is at 0/0; shutdown TRUE
2004-03-01 10:32:35 [475] LOG:  next transaction ID: 374025; next OID: 46414656
2004-03-01 10:32:35 [475] LOG:  database system was not properly shut 
down; automatic recovery in progress
2004-03-01 10:32:35 [475] LOG:  redo starts at 5/1350BFAC
2004-03-01 10:32:35 [475] LOG:  unexpected pageaddr 5/957 in log 
file 5, segment 19, offset 5701632
2004-03-01 10:32:35 [475] LOG:  redo done at 5/1356EB8C
2004-03-01 10:32:38 [475] LOG:  database system is ready




pg_crash.gz
Description: Mac BinHex archive
--

Jeff Bohmer
VisionLink, Inc.
_
303.402.0170
www.visionlink.org
_
People. Tools. Change. Community.


Re: [BUGS] backend crash with certain statements/tables

2004-03-02 Thread Jeff Bohmer
I should have a patch later today.
Great!  I'd like to try out the patch when it's ready.

Thanks,
- Jeff
--
Jeff Bohmer
VisionLink, Inc.
_
303.402.0170
www.visionlink.org
_
People. Tools. Change. Community.


Re: [BUGS] backend crash with certain statements/tables

2004-03-02 Thread Jeff Bohmer
Jeff Bohmer <[EMAIL PROTECTED]> writes:
 Great!  I'd like to try out the patch when it's ready.
Here ya go.


Works for me on OS X and Linux.

Thank you very much!

- Jeff
--
Jeff Bohmer
VisionLink, Inc.
_
303.402.0170
www.visionlink.org
_
People. Tools. Change. Community.


[BUGS] 8.0.0 gmake check fails if on disk, passes on ram disk....

2005-01-23 Thread Jeff Ross
Hi,
If I put the source for 8.0.0 on disk (RAID 1), configure, compile, and
run gmake check, it fails with 33 errors, 30 of which are:

! psql: could not send startup packet: Broken pipe
If I put the same source code up on a ram disk, configure and compile it 
the same way, all 96 tests pass.

OS is OpenBSD 3.6.
I'm currently running 7.3.5, which I'd love to upgrade, but I'm a little
leery until I can determine what is causing this error.

Any thoughts greatly appreciated!
Jeff Ross


[BUGS] 8.0.0 on disk fails regress test, but on ram disk it is fine...

2005-01-23 Thread Jeff Ross
I'm preparing to upgrade our PostgreSQL installation from 7.3.5 to 
8.0.0.  I downloaded, configured, compiled, and did gmake check and 
much to my surprise, 26 of the 96 tests failed.

Examination of regression.diffs shows that 1 of the failures is random 
(and it is ignored) but 23 of the 26 errors are:

! psql: could not send startup packet: Broken pipe
Interestingly enough, if I load the source code onto a 1GB ram disk and 
configure, compile and check it there, all 96 tests pass.

Hardware is RAID 1 using 2 disks with one additional disk as a hot 
spare, dual 2.66 GHz Xeon processors with 2 GB of RAM.  Operating 
system is OpenBSD 3.6.

I'm a little leery of upgrading my PostgreSQL installation with those
kinds of failure rates.

I'd appreciate any insight as to why so many tests would fail when run 
on disk and pass when run in ram.

Thanks,
Jeff Ross


Re: [BUGS] 8.0.0 gmake check fails if on disk, passes on ram disk....

2005-01-24 Thread Jeff Ross
Tom Lane wrote:
Michael Fuhr <[EMAIL PROTECTED]> writes:
On Fri, Jan 21, 2005 at 05:03:08PM -0700, Jeff Ross wrote:
If I put the same source code up on a ram disk, configure and compile it 
the same way, all 96 tests pass.

Interesting.  Is this behavior consistent?  What's different 'twixt
the RAID disk and the RAM disk?

If the problem is at bottom a too low processes-per-user limit, as it
was for Jean-Gerard, then maybe the RAM-disk case passes because of
different timing details.  This theory is a bit of a stretch though.
In any case, we're being shown the wrong output.  What I want to know is
what appears in the postmaster log when these failures happen?
regards, tom lane

I did a make clean, make, and gmake check this morning.  Fewer tests
failed (16 of the 96) on the RAID 1, and again no tests failed on the ram
disk.

Rather than post it in the e-mail, I've put the postmaster.log at
http://www.openvistas.net/postmaster.log
Thanks!


Re: [BUGS] 8.0.0 gmake check fails if on disk, passes on ram disk....

2005-01-24 Thread Jeff Ross
Tom Lane wrote:
Jeff Ross <[EMAIL PROTECTED]> writes:
Rather than post it in the e-mail, I've put the postmaster.log at
http://www.openvistas.net/postmaster.log

I see multiple occurrences of
LOG:  could not fork new process for connection: Resource temporarily 
unavailable
so indeed your process limit is too low.  It's curious that OpenBSD
seems more prone than other platforms to produce a "broken pipe" error
instead of reporting the error message sent back by the postmaster ...
but there is something of a race condition there, so we can't complain
too much.
I have no idea why running the tests off ramdisk would make a difference
in the probability of this failure, but it's academic.  Raise your
process limit.
regards, tom lane

Thank you, Tom! That was indeed the problem.
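For anyone hitting the same wall: on OpenBSD the per-class process limits live in /etc/login.conf, so raising maxproc for the login class the postgres user belongs to is the usual fix. A fragment along these lines (class name and values are purely illustrative, not a recommendation):

```
# /etc/login.conf -- illustrative values, tune to your workload
daemon:\
	:maxproc-max=512:\
	:maxproc-cur=256:\
	:tc=default:
```

If a login.conf.db exists, rebuild it with cap_mkdb /etc/login.conf, then log the user in again for the new limits to take effect.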
Jeff Ross


[BUGS] BUG #1938: pg_dump misinterprets "default now()";

2005-10-04 Thread Jeff MacDonald

The following bug has been logged online:

Bug reference:  1938
Logged by:  Jeff MacDonald
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   FreeBSD 5.2.1
Description:pg_dump misinterprets "default now()";
Details: 

Hi,

I did a backup from 7.3.2 using pg_dumpall.

When I did a restore, all of my timestamps that had been defaulted to
now() were now defaulted to the time at which I piped my dump back into
postgres.

Meaning the now() was evaluated during the restore instead of just being
copied as a default expression.
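The classic way this symptom arises is a default written as the string literal 'now' rather than the function call now(): the literal is coerced to a timestamp constant when the CREATE TABLE from the dump runs, while the function form is re-evaluated for every insert. Whether that is what happened here is an assumption, but the distinction is easy to demonstrate:

```sql
-- Re-evaluated on every insert: the stored default is the function call.
CREATE TABLE t_func (ts timestamp DEFAULT now());

-- Frozen at table-creation (i.e. restore) time: 'now' is coerced to a
-- timestamp constant the moment this statement is processed.
CREATE TABLE t_lit (ts timestamp DEFAULT 'now'::timestamp);
```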


