[BUGS] BUG #4395: internal account lookup failure

2008-09-03 Thread Daniel

The following bug has been logged online:

Bug reference:  4395
Logged by:  Daniel
Email address:  [EMAIL PROTECTED]
PostgreSQL version: postgresql-8.3.
Operating system:   Windows Vista
Description:internal account lookup failure
Details: 

What do I have to do?

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #1405: installation

2005-01-17 Thread daniel

The following bug has been logged online:

Bug reference:  1405
Logged by:  daniel
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.0
Operating system:   windows xp professional
Description:installation
Details: 

Error on installation; it does not install.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[BUGS] BUG #1460: unable to download

2005-02-07 Thread Daniel

The following bug has been logged online:

Bug reference:  1460
Logged by:  Daniel
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   win 2000
Description:unable to download
Details: 

An error has occurred processing the request...  
 

Unable to process FTP Request
ftp://ftp.cl.postgresql.org/ftp/pub/postgresql/win32/postgresql-8.0.1.zip
  
 Proxy v4.14 (Release)



[BUGS] pgcrypto bug or my brain?

2010-12-03 Thread daniel
I have discovered either a bug in pgcrypto (postgresql-9.0.1) or at least an 
implementation change that is incompatible with how I've been using pgcrypto. 
I'm hoping a discussion here on bugs@ is an appropriate first course of
action.

I have a couple of databases in which I have been using pgcrypto for about 10 
years (one relation has > 1.8 million records). I believe I started using 
pgcrypto with postgresql-7.2.x and have had absolutely no adverse issues with 
my data during regular upgrades through postgresql-8.4.5. I know that the raw 
encrypt() and decrypt() are no longer recommended when the pgp_*() can be used, 
but this is now a legacy issue since the pgp_*() functions did not even exist 
10 years ago. Please note that the pgp_*() functions do work fine in 
postgresql-9.0.1.

During testing of upgrade to 9.0.1 (I _love_ streaming replication!), my 
encrypted data gets mangled during import (psql -f ) 
and, in fact, I can't even use encrypt() or decrypt() on new data in my "usual 
way". Here's an example that works on 7.2.x through 8.4.5 but not 9.0.1 
(additional platform details are below):

--
-- Pull in pgcrypto functions:
--
\i /usr/local/pgsql/share/contrib/pgcrypto.sql


--
-- Create a test table:
--
create table cryptest (
  id serial not null primary key,
  plaint character varying not null,
  enct bytea
);


--
-- Insert some data:
--
insert into cryptest (plaint, enct) values
  ('Testing blowfish...', encrypt('Testing blowfish...',
E'I know this is not a proper key but it _should_ work', 'bf'));


--
-- Fetch the data:
--
select
  id,
  plaint,
  decrypt(enct, E'I know this is not a proper key but it _should_ 
work', 'bf')
from
  cryptest;


Platform:
CentOS-5.5 (fully up to date with 'yum update') both i386 and x86_64
Postgresql configured with './configure --with-openssl'


I'll be happy to provide any additional information necessary and do any sort 
of testing (if it should prove to be necessary) though my skills in this are 
somewhat limited.

Thanks,

Daniel




Re: [BUGS] pgcrypto bug or my brain?

2010-12-03 Thread daniel
On Dec 3, 2010, at 1:49 PM, Heikki Linnakangas wrote:

> On 03.12.2010 19:48, daniel wrote:
>> I have discovered either a bug in pgcrypto (postgresql-9.0.1) or at least an 
>> implementation change that is incompatible with how I've been using 
>> pgcrypto. I'm hoping a discussion here on bugs@ is an appropriate first 
>> course of discussion.
>> [...]
> 
> decrypt() returns a bytea, and the default representation of bytea was 
> changed in 9.0. The result is the same but it's just displayed differently. 
> Try "set bytea_output TO 'escape'" to get the old familiar output.
> 
> The proper way to do that is to use convert_to/from to convert from text to 
> bytea before encrypting, and from bytea to text after decrypting.
> 
> -- 
>  Heikki Linnakangas
>  EnterpriseDB   http://www.enterprisedb.com
> 
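For reference, a small stand-alone sketch (plain Python, outside the database) of the two bytea output formats Heikki refers to. The encoding rules follow the documented 'hex' (default since 9.0) and 'escape' (pre-9.0 default) formats; the sample byte string is made up:

```python
# Sketch: how the same byte string renders under PostgreSQL's two
# bytea output formats.  'hex' prefixes \x and hex-encodes everything;
# 'escape' shows printable ASCII literally and octal-escapes the rest.

def bytea_hex(data: bytes) -> str:
    return "\\x" + data.hex()

def bytea_escape(data: bytes) -> str:
    out = []
    for b in data:
        if b == 0x5C:                # backslash is doubled
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:      # printable ASCII passes through
            out.append(chr(b))
        else:                        # everything else: \nnn octal
            out.append("\\%03o" % b)
    return "".join(out)

data = b"Testing blowfish...\x01"
print(bytea_hex(data))     # \x54657374696e6720626c6f77666973682e2e2e01
print(bytea_escape(data))  # Testing blowfish...\001
```

Both renderings describe the same bytes; only the display differs, which is why `set bytea_output TO 'escape'` restores the familiar output.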


Heikki,

I figured (and hoped) that it would turn out to be something like that--I guess 
I just didn't dig deep enough to find the answer myself. I need to read the 
release notes more thoroughly.

Thanks much for your help!

Daniel




[BUGS] BUG #7497: Excessive toast bloat after standby promotion

2012-08-15 Thread daniel
The following bug has been logged on the website:

Bug reference:  7497
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.0.7
Operating system:   Ubuntu 10.04
Description:

We have an interesting case of an application that is non-trivial but not
incredibly busy that has bloated up to over 800GB.  Our charts indicate that
this bloat starts right at the time of a promotion, even though there is an
absence of code changes for quite some time both before and after.  This
application had been running for many months in a constant amount of space
-- roughly 300GB -- prior to this.

There is only about 10GB of live data in this toast relation, and
sum(octet_length()) returns quite rapidly to confirm it.  Although that
meant there was a lot of bloat to begin with (there was probably a large
delete at some point), predictably the usage of disk was completely stable
for a very long time as heap was re-used, and there were no problems.

We have tried adjusting the autovacuum cost parameters to be very aggressive
(1 cost-limit) and also ran a manual vacuum verbose for some time and
got some output, even though we had to cancel it after a few hours because
it was causing some problems.  Yet, it does claim to be marking roughly 20GB
at a time as free.  In spite of that, there is no apparent inflection at all
to the rate of bloat accumulation, which seems to be roughly the rate of
information churn.

Our main workload is a series of:

  UPDATE tbl SET str = str || 'more string literal';

Periodically, this field is removed via nullification:

  UPDATE tbl SET str = NULL;

The vacuum trace is as follows.


INFO:  vacuuming "pg_toast.pg_toast_16394"
INFO:  scanned index "pg_toast_16394_index" to remove 11184521 row versions
DETAIL:  CPU 8.42s/29.90u sec elapsed 318.26 sec.
INFO:  "pg_toast_16394": removed 11184521 row versions in 2494386 pages
DETAIL:  CPU 40.10s/18.78u sec elapsed 1334.48 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184522 row versions
DETAIL:  CPU 7.34s/30.13u sec elapsed 263.07 sec.
INFO:  "pg_toast_16394": removed 11184522 row versions in 2476585 pages
DETAIL:  CPU 33.62s/18.83u sec elapsed 661.06 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184521 row versions
DETAIL:  CPU 7.29s/27.62u sec elapsed 235.69 sec.
INFO:  "pg_toast_16394": removed 11184521 row versions in 2461097 pages
DETAIL:  CPU 34.35s/18.74u sec elapsed 669.34 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184522 row versions
DETAIL:  CPU 7.25s/26.05u sec elapsed 233.71 sec.
INFO:  "pg_toast_16394": removed 11184522 row versions in 2473206 pages
DETAIL:  CPU 35.11s/19.04u sec elapsed 652.17 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184521 row versions
DETAIL:  CPU 7.55s/23.78u sec elapsed 238.95 sec.
INFO:  "pg_toast_16394": removed 11184521 row versions in 2470127 pages
DETAIL:  CPU 35.33s/19.17u sec elapsed 657.84 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184523 row versions
DETAIL:  CPU 7.53s/22.20u sec elapsed 230.27 sec.
INFO:  "pg_toast_16394": removed 11184523 row versions in 2565300 pages
DETAIL:  CPU 36.63s/19.53u sec elapsed 696.75 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184523 row versions
DETAIL:  CPU 8.35s/19.89u sec elapsed 243.68 sec.
INFO:  "pg_toast_16394": removed 11184523 row versions in 2529355 pages
DETAIL:  CPU 35.89s/19.67u sec elapsed 640.45 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184523 row versions
DETAIL:  CPU 8.25s/18.06u sec elapsed 270.73 sec.
INFO:  "pg_toast_16394": removed 11184523 row versions in 2493637 pages
DETAIL:  CPU 37.06s/20.70u sec elapsed 699.94 sec.
INFO:  scanned index "pg_toast_16394_index" to remove 11184523 row versions
DETAIL:  CPU 9.12s/16.02u sec elapsed 264.31 sec.
INFO:  "pg_toast_16394": removed 11184523 row versions in 2489705 pages
DETAIL:  CPU 35.51s/21.36u sec elapsed 655.56 sec.
^CCancel request sent
ERROR:  canceling statement due to user request





[BUGS] BUG #7498: Questionable interval parsing when in the form 'n m days'

2012-08-15 Thread daniel
The following bug has been logged on the website:

Bug reference:  7498
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.1.4
Operating system:   Ubuntu 12.04
Description:

This is an expression that should probably raise an error rather than evaluate
to "true":

select '1 5 hours'::interval = '1 day 5 hours'::interval;
 ?column? 
----------
 t
(1 row)

I think that the first spelling, a unit-less '1', should not be accepted. 
Other unit combinations agree:

select '1 5 minutes'::interval;
ERROR:  invalid input syntax for type interval: "1 5 minutes"
LINE 1: select '1 5 minutes'::interval;
   ^
select '1 5 months'::interval;
ERROR:  invalid input syntax for type interval: "1 5 months"
LINE 1: select '1 5 months'::interval;
   ^
select '1 5 seconds'::interval;
ERROR:  invalid input syntax for type interval: "1 5 seconds"
LINE 1: select '1 5 seconds'::interval;
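A toy illustration (in Python; this is not PostgreSQL's actual parser) of the lenient rule being objected to: if a bare leading number is silently assumed to mean days whenever a later unit-bearing field follows, both spellings normalize identically and therefore compare equal. Only days and hours are handled here:

```python
# Toy only -- NOT PostgreSQL's parser.  It mimics the lenient rule the
# report objects to: a bare leading number is assumed to mean days,
# so '1 5 hours' and '1 day 5 hours' normalize to the same value.
def normalize(text):
    toks = text.split()
    days = hours = 0
    i = 0
    while i < len(toks):
        n = int(toks[i])
        unit = toks[i + 1] if i + 1 < len(toks) else None
        if unit in ("day", "days"):
            days += n
            i += 2
        elif unit in ("hour", "hours"):
            hours += n
            i += 2
        elif unit is not None and unit.lstrip("-").isdigit():
            days += n          # bare number: silently assumed to be days
            i += 1
        else:
            raise ValueError("invalid input syntax: %r" % text)
    return (days, hours)

print(normalize("1 5 hours"))      # (1, 5)
print(normalize("1 day 5 hours"))  # (1, 5)
```

Under this rule the equality in the report falls out directly, while a stricter parser would reject the unit-less '1'.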






[BUGS] BUG #7510: Very bad costing estimation on gin vs gist with FTS

2012-08-29 Thread daniel
The following bug has been logged on the website:

Bug reference:  7510
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.1.4
Operating system:   Ubuntu 10.04
Description:

Summary: Planner chooses GiST even if GIN is much better.

We have a table that we decided to use GiST-based full text search on, but
received terrible performance.  It's not a very big table, nor are the
tsvectors very large -- we FTS a tiny bit of text and throw in a few
identifiers at our choosing to enable a search in an application of ours.

The root cause of that is that GiST is terrible when using prefix matching
operators on tsvectors (the ":*" operator in the search language), which is
what one nominally wants for incremental search.

Upon doing "EXPLAIN (ANALYZE, BUFFERS, FORMAT YAML)", this is what the
plan looks like:


- Plan:
    Node Type: "Limit"
    Startup Cost: 0.00
    Total Cost: 14.48
    Plan Rows: 6
    Plan Width: 240
    Actual Startup Time: 499.515
    Actual Total Time: 499.515
    Actual Rows: 0
    Actual Loops: 1
    Shared Hit Blocks: 269246
    Shared Read Blocks: 0
    Shared Written Blocks: 0
    Local Hit Blocks: 0
    Local Read Blocks: 0
    Local Written Blocks: 0
    Temp Read Blocks: 0
    Temp Written Blocks: 0
    Plans:
      - Node Type: "Index Scan"
        Parent Relationship: "Outer"
        Scan Direction: "NoMovement"
        Index Name: "resources_text_searchable_idx"
        Relation Name: "resources"
        Alias: "resources"
        Startup Cost: 0.00
        Total Cost: 14.48
        Plan Rows: 6
        Plan Width: 240
        Actual Startup Time: 499.511
        Actual Total Time: 499.511
        Actual Rows: 0
        Actual Loops: 1
        Index Cond: "(search_document @@ '''daniel'':* & ''heroku.c'':*'::tsquery)"
        Shared Hit Blocks: 269246
        Shared Read Blocks: 0
        Shared Written Blocks: 0
        Local Hit Blocks: 0
        Local Read Blocks: 0
        Local Written Blocks: 0
        Temp Read Blocks: 0
        Temp Written Blocks: 0
  Triggers:
  Total Runtime: 499.571

What's notable here is that the shared hit blocks is about 2GB worth of
data.  The index per \di+ is only about 350MB, and the table per \dt+ is
onl

[BUGS] BUG #7648: Momentary index corruption while in hot standby

2012-11-09 Thread daniel
The following bug has been logged on the website:

Bug reference:  7648
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.0.9
Operating system:   Ubuntu 10.04
Description:

At more or less one point in time exactly (the same second, at minimum), a
couple of errors that would nominally suggest corruption were raised on a hot
standby while performing query access:

PGError: ERROR: could not read block 556642 in file "base/16385/2904143.4":
read only 0 of 8192 

Immediately thereafter, no occurrences of the error resurfaced and any
attempts to reproduce the issue with an identical query were met with
failure.

On investigation, this relfile is a fairly common beast: an integer btree
index being used for row identification (e.g. serial).

This is reading from a 9.0.8 Postgres.





[BUGS] BUG #7753: Cannot promote out of hot standby

2012-12-12 Thread daniel
The following bug has been logged on the website:

Bug reference:  7753
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.1.6
Operating system:   Ubuntu 10.04
Description:

Touching a trigger file will not cause promotion out of hot standby.
Basically, an apparently normally-working hot-standby database will
not leave hot standby.  The database emitting WAL is version 9.1.4.

Everything appears normal in the log (downloads and restoring of
archived segments), and the server seems to take no notice of the
trigger file.

To force the issue, I introduced an error into the configuration of
the restoration program to cause it to exit.  Normally that's no
problem; postgres would just keep on trying to restore a segment over
and over until the error is fixed.

Instead, the server crashes:

 [413-1]  [COPPER] LOG:  restored log file "0001034D0050" from
archive
wal_e.worker.s3_worker INFO MSG: completed download and
decompression#012DETAIL: Downloaded and decompressed
"s3://archive-root/wal_005/0001034D0051.lzo" to
"pg_xlog/RECOVERYXLOG"
 [414-1]  [COPPER] LOG:  restored log file "0001034D0051" from
archive
wal_e.worker.s3_worker INFO MSG: completed download and
decompression#012DETAIL: Downloaded and decompressed
"s3://archive-root/wal_005/0001034D0052.lzo" to
"pg_xlog/RECOVERYXLOG"
 [415-1]  [COPPER] LOG:  restored log file "0001034D0052" from
archive

# I introduce the failure here

wal_e.main   ERRORMSG: no AWS_SECRET_ACCESS_KEY defined#012HINT:
Define the environment variable AWS_SECRET_ACCESS_KEY.
LOG:  trigger file found: /etc/postgresql/wal-e.d/pull-env/STANDBY_OFF
LOG:  redo done at 34D/52248590
LOG:  last completed transaction was at log time 2012-12-10 wal_e.main  
ERRORMSG: no AWS_SECRET_ACCESS_KEY defined#012HINT: Define the
environment variable AWS_SECRET_ACCESS_KEY.
PANIC:  could not open file "pg_xlog/0001034D0052" (log file
845, segment 82): No such file or directory
LOG:  startup process (PID 7) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.

I can fix the configuration and restart the server, and everything is
as fine as before.  Next, I try removing recovery.conf and restarting
the server as an alternative way of promoting... but to no avail;
it does, however, produce a slightly different error message:

# Server begins starting
LOG:  loaded library "auto_explain"
LOG:  loaded library "pg_stat_statements"
LOG:  database system was interrupted while in recovery at log time
2012-12-10 15:20:03 UTC
HINT:  If this has occurred more than once some data might be corrupted and
you might need to choose an earlier recovery target.
LOG:  could not open file "pg_xlog/0001034E001A" (log file 846,
segment 26): No such file or directory
LOG:  invalid primary checkpoint record
LOG:  could not open file "pg_xlog/0001034D00F2" (log file 845,
segment 242): No such file or directory
LOG:  invalid secondary checkpoint record
PANIC:  could not locate a valid checkpoint record
LOG:  startup process (PID 7) was terminated by signal 6: Aborted
LOG:  aborting startup due to startup process failure
main process (24284) terminated with status 1

pg_control looks like this around the same time, for reference:

pg_control version number:903
Catalog version number:   201105231
Database cluster state:   in archive recovery
pg_control last modified: Wed 12 Dec 2012 09:22:30 PM UTC
Latest checkpoint location:   351/1FE194C0
Prior checkpoint location:351/FD64A78
Latest checkpoint's REDO location:351/131848C8
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID:  0/652342033
Latest checkpoint's NextOID:  103224
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:455900714
Latest checkpoint's oldestXID's DB:   16385
Latest checkpoint's oldestActiveXID:  652311442
Time of latest checkpoint:Mon 10 Dec 2012 07:19:23 PM UTC
Minimum recovery ending location: 351/4BFFFE20
Backup start location:0/0
Current wal_level sett

[BUGS] BUG #8058: CLUSTER and VACUUM FULL fail to free space

2013-04-12 Thread daniel
The following bug has been logged on the website:

Bug reference:  8058
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.0.13
Operating system:   Ubuntu 10.04
Description:

We have a somewhat high-churn table acting as a queue, and over time it's
grown to be something like a gigabyte.  I surmised it might be vanilla
bloat, but the truth seems somewhat more exotic because both VACUUM FULL and
CLUSTER generated absolutely no new free space.

In the end, ALTER TABLE and CREATE TABLE ... (LIKE) ran nearly instantly and
got the table size down to a few hundred K from 900M.

This caused quite a few problems because what would normally be a cheap index
scan over a mere 100 tuples was taking a few seconds.

There are TOASTed fields on this table, ranging in a few hundred bytes of
text per attribute.

We have retained the old bloated table so we can poke at it.





[BUGS] BUG #4313: Strange optimizer behaviour

2008-07-17 Thread Daniel Podlejski

The following bug has been logged online:

Bug reference:  4313
Logged by:  Daniel Podlejski
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.1, 8.3.3
Operating system:   Linux
Description:Strange optimizer behaviour
Details: 

cvalley_dev=# \d messages
                                 Table "public.messages"
   Column   |            Type             |                       Modifiers
------------+-----------------------------+-------------------------------------------------------
 id         | integer                     | not null default nextval('messages_id_seq'::regclass)
 sender_id  | integer                     | not null
 rcptto_id  | integer                     | not null
 subject    | text                        |
 body       | text                        |
 read       | boolean                     | not null default false
 deleted    | boolean                     | not null default false
 created_at | timestamp without time zone |
 updated_at | timestamp without time zone |
Indexes:
"messages_pkey" PRIMARY KEY, btree (id)

cvalley_dev=# EXPLAIN SELECT * FROM messages WHERE (messages."id" = 11);
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using messages_pkey on messages  (cost=0.00..8.35 rows=1 width=51)
   Index Cond: (id = 11)
(2 rows)

cvalley_dev=# EXPLAIN SELECT * FROM messages WHERE (messages."id" = 11);
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on messages  (cost=0.00..23400.56 rows=4588 width=51)
   Filter: ((id)::numeric = 11::numeric)
(2 rows)

I think there is no sense in casting to numeric when the value is too big for
the integer field type.
On a really big table this "bug" causes unnecessary I/O load.
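A toy sketch (made-up data structures, not the planner's code) of why the numeric cast hurts: a btree index serves predicates on the exact indexed expression, here the raw integer column `id`; once the comparison is against `(id)::numeric`, the predicate no longer matches the indexed expression and the plan falls back to a sequential scan:

```python
# Toy model of the plan choice above.  The "index" only answers
# lookups phrased against the indexed expression itself.
index_on_id = {11: "row 11"}  # pretend btree index on messages.id

def choose_plan(predicate_expr, value):
    if predicate_expr == "id":           # matches the indexed expression
        return ("Index Scan", index_on_id.get(value))
    return ("Seq Scan", None)            # cast expression: no match

print(choose_plan("id", 11))             # ('Index Scan', 'row 11')
print(choose_plan("(id)::numeric", 11))  # ('Seq Scan', None)
```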



[BUGS] BUG #4374: pg_restore does not restore public schema comment

2008-08-25 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  4374
Logged by:  Daniel Migowski
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.3
Operating system:   Windows
Description:pg_restore does not restore public schema comment
Details: 

Hello dear developers,

Currently a schema dump (custom format, containing the public schema
comment) can be restored, but the public schema comment is not restored.

I assume you check whether a schema already exists and then skip the schema
part. Please don't skip setting the comment, since we use it for versioning
purposes, and restoring from a dump breaks it.

With best regards,
Daniel Migowski



[BUGS] BUG #4375: pg_dump documentation, -Z option

2008-08-25 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  4375
Logged by:  Daniel Migowski
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.1
Operating system:   Linux
Description:pg_dump documentation, -Z option
Details: 

The option documentation misses the fact that you can produce gzipped output
files with text format.



Re: [BUGS] BUG #4374: pg_restore does not restore public schema comment

2008-08-26 Thread Daniel Migowski

Hello Craig,

Craig Ringer schrieb:

So: the user's report is incorrect in blaming pg_restore, but correct in
that comments on the public schema (and presumably other default schema)
aren't preserved by pg_dump | pg_restore. The real reason appears to be
that they're not dumped in the first place.
  
I do a dump on Linux Postgres 8.3.1 in custom format. When I try to read 
it (gzip -d > myfile; less myfile) I find a string like "COMMENT ON 
SCHEMA public IS 'mycomment'". So I assumed that pg_dump actually dumps 
it. But I don't know why there is a "DROP SCHEMA public" in the file. 
That's why I assumed pg_restore fails here.

I haven't checked to see if a custom dump behaves differently.

Craig Ringer
  

I didn't check the plain text format :).

Daniel Migowski



[BUGS] Set-returning functions only allowed if written in language 'sql'

2009-02-10 Thread Daniel Migowski

Hello dear PostgreSQL developers,

I noticed the following strange behaviour with set-returning functions. 
Whether sets are allowed seems to depend on the language the function is 
written in, which makes completely no sense to me. See the following 
functions x() and y(): x() is written in 'sql' and works, y() is written 
in plpgsql and fails. Any reasons for this that I do not realize?


With best regards,
Daniel Migowski



CREATE FUNCTION x() RETURNS SETOF int4 AS
$$
   SELECT 1
   UNION
   SELECT 2
$$
LANGUAGE 'sql';
SELECT x(); -- fine with two result rows.

CREATE FUNCTION y() RETURNS SETOF int4 AS
$$
BEGIN
   RETURN NEXT 1;
   RETURN NEXT 2;
END
$$
LANGUAGE 'plpgsql';
SELECT y(); -- fails with:
ERROR: set-valued function called in context that cannot accept a set

SQL state: 0A000
Context: PL/pgSQL function "y" line 2 at RETURN NEXT




[BUGS] BUG #4733: Feature request: add plpy.query_plan(...) to plpythonu

2009-03-25 Thread Daniel Miller

The following bug has been logged online:

Bug reference:  4733
Logged by:  Daniel Miller
Email address:  dan...@keystonewood.com
PostgreSQL version: 8.x
Operating system:   N/A
Description:Feature request: add plpy.query_plan(...) to plpythonu
Details: 

I have coded a function that I find very useful in plpythonu functions. The
advantage of this function is that it prepares a query plan and returns a
python function that can simply be called with the necessary arguments to
execute the query.

Could this be added as a built-in, like plpy.execute(...) and
plpy.prepare(...)?



def query_plan(query, *argtypes, **flags):
    """Prepare a query plan and store it in the static data (SD) dict

    Arguments:
    query     -- the query to prepare
    *argtypes -- argument type names (example: "int4", "text", "bool", etc.)

    Returns a function that takes arguments corresponding to the given
    argtypes.  The function also takes an optional 'limit' keyword
    argument.  When called, the function will execute the query and
    return the query result object.
    """
    if query in SD:
        return SD[query]
    plan = plpy.prepare(query, argtypes)
    def exec_query(*args, **kw):
        if "limit" in kw:
            limit = (kw.pop("limit"),)
        else:
            limit = ()
        if kw:
            raise TypeError("unexpected keyword arguments: %s" % ", ".join(kw))
        return plpy.execute(plan, list(args), *limit)
    SD[query] = exec_query
    return exec_query
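A stand-alone sketch of how the proposed helper would be used. `plpy` and `SD` exist only inside the plpythonu interpreter, so both are stubbed out here; the stub's return shapes (tuples and dicts) are made up purely for illustration:

```python
# Stub out the plpythonu environment so the helper can run outside
# the server.  Return values here are invented for illustration only.
class FakePlpy:
    def prepare(self, query, argtypes):
        return ("plan", query, tuple(argtypes))

    def execute(self, plan, args, *limit):
        return {"plan": plan, "args": args, "limit": limit}

plpy = FakePlpy()
SD = {}

def query_plan(query, *argtypes, **flags):
    # same logic as the proposal: cache the prepared plan in SD and
    # return a closure that executes it
    if query in SD:
        return SD[query]
    plan = plpy.prepare(query, argtypes)
    def exec_query(*args, **kw):
        limit = (kw.pop("limit"),) if "limit" in kw else ()
        if kw:
            raise TypeError("unexpected keyword arguments: %s" % ", ".join(kw))
        return plpy.execute(plan, list(args), *limit)
    SD[query] = exec_query
    return exec_query

get_user = query_plan("SELECT * FROM users WHERE id = $1", "int4")
result = get_user(42, limit=1)
print(result["args"], result["limit"])  # [42] (1,)
```

The appeal of the design is that the second call with the same query string returns the already-cached closure, so the plan is prepared only once per session.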



Thanks.

~ Daniel



[BUGS] Query with aggregate over subselects fails with ERROR: plan should not reference subplan's variable

2009-04-24 Thread Daniel Grace
The following nonsensical query causes PostgreSQL to fail with ERROR: plan
should not reference subplan's variable.  (This was stripped down from a
'useful' query that triggered the same bug.)  First encountered on 8.3.4,
reproduced on 8.3.7

BEGIN;

CREATE SCHEMA bug_schema;
SET SEARCH_PATH='bug_schema';

CREATE FUNCTION AGG_GROUP_CONCAT_SFUNC(IN _state TEXT, IN _str TEXT, IN _sep
TEXT) RETURNS TEXT
SECURITY INVOKER
LANGUAGE PLPGSQL
IMMUTABLE
CALLED ON NULL INPUT
AS $PROC$
BEGIN
IF _str IS NULL THEN RETURN _state; END IF;
IF _state IS NULL THEN RETURN _str; END IF;
RETURN _state || _sep || _str;
END;
$PROC$;


CREATE AGGREGATE GROUP_CONCAT(TEXT, TEXT) (
STYPE = TEXT,
SFUNC = AGG_GROUP_CONCAT_SFUNC
);

CREATE TABLE foo (
    id      serial         NOT NULL,
    fname   varchar(64)    NOT NULL,
    PRIMARY KEY (id)
);

-- Fails: ERROR:  plan should not reference subplan's variable
SELECT (SELECT GROUP_CONCAT((SELECT s2.fname FROM foo AS s2 WHERE
s2.id=s.id ORDER BY fname), '; ')) AS foolist FROM foo AS s;

-- Also fails, same error
SELECT (SELECT MAX((SELECT s2.fname FROM foo AS s2 WHERE s2.id=s.id ORDER BY
fname))) AS foomax FROM foo AS s;

ROLLBACK;
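For reference, the transition function's NULL handling, sketched stand-alone in Python (sample row values are made up):

```python
# The aggregate's transition step, mirroring the plpgsql sfunc:
# NULL (None) inputs are skipped, and a NULL state is replaced by
# the first non-NULL value.
from functools import reduce

def group_concat_sfunc(state, s, sep):
    if s is None:
        return state
    if state is None:
        return s
    return state + sep + s

rows = ["alice", None, "bob", "carol"]
result = reduce(lambda st, s: group_concat_sfunc(st, s, "; "), rows, None)
print(result)  # alice; bob; carol
```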


Re: [BUGS] Query with aggregate over subselects fails with ERROR: plan should not reference subplan's variable

2009-04-24 Thread Daniel Grace
On Fri, Apr 24, 2009 at 5:38 PM, Tom Lane  wrote:

> Daniel Grace  writes:
> > The following nonsensical query causes PostgreSQL to fail with ERROR:
> plan
> > should not reference subplan's variable.  (This was stripped down from an
> > 'useful' query that triggered the same bug).  First encountered on 8.3.4,
> > reproduced on 8.3.7
>
> Hmmm ... I guess something is getting confused about the level of query
> nesting.  FWIW, you can avoid the bug in these two examples by omitting
> the inner "SELECT" keyword, which is useless anyway.  Perhaps it is
> needed in your real query though ...
>
>regards, tom lane
>
It's required in my case to force the aggregate function to evaluate its
inputs in a set order.  I'm trying to replace MySQL's GROUP_CONCAT function,
including the ORDER BY option.

I had another variation that did not use sub-SELECTs, but instead joined
something along the lines of (SELECT * FROM foo ORDER BY fname) AS foo. That
partially worked -- however, due to the nature of the query, it had the side
effect of including some duplicate data, and not the kind that could be fixed
simply by adding DISTINCT.

I'm not going to spam the list with all of the table definitions for the
real query, but I will paste it by itself to give a better idea of what I
was originally attempting:


SELECT
s.fid, c.flags, c.id, c.title, cs.ut_start, cs.ut_end, cr.full_name,
cal.title AS cancel_reason,
(SELECT GROUP_CONCAT((SELECT s2.fname FROM student AS s2 WHERE s2.id=
s.id ORDER BY fname), '; ')) AS students,
(SELECT GROUP_CONCAT((SELECT p.gname FROM course_teacher AS ct INNER
JOIN person AS p ON ct.tid=p.id WHERE ct.cid=c.id ORDER BY p.gname), '; '))
AS teacher
FROM
student AS s
INNER JOIN student_course_session AS scs ON scs.sid=s.id
INNER JOIN course_session AS cs ON cs.id=scs.csid
INNER JOIN course AS c ON c.id=cs.cid
LEFT JOIN course_room AS cr ON cr.id=c.room_id
LEFT JOIN calendar AS cal ON cal.id=cs.cancelled_by
GROUP BY s.fid, cs.id, c.flags, c.id, c.title, cs.ut_start, cs.ut_end,
cr.full_name, cal.title, cs.tsstart
ORDER BY s.fid, cs.tsstart, c.title;

-- Daniel Grace
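
[Editorial aside: the thread assumes GROUP_CONCAT is a user-defined
aggregate mimicking MySQL's. On PostgreSQL 9.0 and later the same result
is available natively via string_agg, which accepts an ORDER BY inside the
aggregate call itself. A sketch only, assuming a student(fid, fname) table
as in the query above:]

```sql
-- Sketch, not the thread's actual schema: assumes student(fid, fname).
-- PostgreSQL 9.0+ supports ordering inside the aggregate, which removes
-- the need for the ordered sub-SELECT workaround entirely:
SELECT s.fid,
       string_agg(s.fname, '; ' ORDER BY s.fname) AS students
FROM student AS s
GROUP BY s.fid;
```

[On 8.3, where string_agg does not exist, Tom Lane's pattern later in this
thread -- aggregating over an ordered subselect in the FROM clause -- is
the reliable workaround.]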


Re: [BUGS] Query with aggregate over subselects fails with ERROR: plan should not reference subplan's variable

2009-04-24 Thread Daniel Grace
No luck there either (it runs, but with incorrect results). Since I know
this is a bugs list and not a support list, I just want to point out that
even though what I was doing when I triggered the bug is apparently
incorrect and 'silly', some complicated legitimate query might still
trigger the same problem -- so it may be worth looking into.

On Fri, Apr 24, 2009 at 7:19 PM, Tom Lane  wrote:

> I wrote:
> > I'm talking about the underlined SELECT, not the one inside the
> > aggregate.  AFAICS this one is totally useless.
>
> Oh, wait.  It is useless in the query as written, but now that I think
> twice about what you're trying to accomplish, you do need three levels
> of SELECT keywords.  Just not like that.  I think what you actually
> want is
>
> SELECT
>...
>(SELECT GROUP_CONCAT(t.fname, '; ') FROM
>   (SELECT s2.fname FROM student AS s2
>WHERE s2.id=s.id ORDER BY fname) AS t) AS students,
>...
> FROM
>student AS s
>
> What you wrote instead is just wrong --- it would fail if there were
> multiple students with the same id (can that actually happen?
> Maybe there's more wrong with this query...), because what you
> wrote is a scalar sub-SELECT inside an aggregate call that belongs
> to the outermost query.
>
>regards, tom lane
>



-- 
Daniel Grace
AGE, LLC
System Administrator and Software Developer
dgr...@wingsnw.com // (425)327-0079 // www.wingsnw.com


Re: [BUGS] Query with aggregate over subselects fails with ERROR: plan should not reference subplan's variable

2009-04-25 Thread Daniel Grace
On Sat, Apr 25, 2009 at 9:52 AM, Tom Lane  wrote:

> Daniel Grace  writes:
> > The following nonsensical query causes PostgreSQL to fail with ERROR:
> plan
> > should not reference subplan's variable.  (This was stripped down from an
> > 'useful' query that triggered the same bug).  First encountered on 8.3.4,
> > reproduced on 8.3.7
>
> Patch is here:
> http://archives.postgresql.org/pgsql-committers/2009-04/msg00277.php
>
> I still think that it won't affect you once you have the query logic
> straight, though.
>
Thanks for the info.  I've since fixed the query, so you are right in that
regard ;)

-- Daniel Grace


[BUGS] BUG #4971: Backend crash while doing nothing...

2009-08-09 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  4971
Logged by:  Daniel Migowski
Email address:  dmigow...@ikoffice.de
PostgreSQL version: 8.3.5
Operating system:   Windows XP
Description:Backend crash while doing nothing...
Details: 

Hello, poor dear debuggers,

I just found my lovely Postgres server silent when I tried to connect
again, and noticed it had been shut down with the following message
in the log:

2009-08-09 19:57:01 CEST [2208] LOG:  server process (PID 8384) was
terminated by exception 0xC13A
2009-08-09 19:57:01 CEST [2208] HINT:  See C include file "ntstatus.h" for a
description of the hexadecimal value.
2009-08-09 19:57:01 CEST [2208] LOG:  terminating any other active server
processes
2009-08-09 19:57:01 CEST [2208] LOG:  all server processes terminated;
reinitializing
2009-08-09 19:57:02 CEST [2208] FATAL:  pre-existing shared memory block is
still in use
2009-08-09 19:57:02 CEST [2208] HINT:  Check if there are any old server
processes still running, and terminate them.

I had some localization issues some time ago, so even though it's probably
not relevant here, note that all my locales are set to DE.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Memory error

2009-08-27 Thread Daniel Weinberger
Yesterday I ran an SQL import.  It failed after 3.33 hours with the
following message:



ERROR:  out of memory

DETAIL:  Failed on request of size 32.



** Error **



ERROR: out of memory

SQL state: 53200
Detail: Failed on request of size 32.

The log file is as follows:

2009-08-26 18:43:40 PDT   LOG:  checkpoints are occurring too frequently (28
seconds apart)
2009-08-26 18:43:40 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:45:10 PDT   LOG:  checkpoints are occurring too frequently (28
seconds apart)
2009-08-26 18:45:10 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:45:36 PDT   LOG:  checkpoints are occurring too frequently (26
seconds apart)
2009-08-26 18:45:36 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:50:20 PDT   LOG:  checkpoints are occurring too frequently (27
seconds apart)
2009-08-26 18:50:20 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:51:21 PDT   LOG:  checkpoints are occurring too frequently (27
seconds apart)
2009-08-26 18:51:21 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:54:01 PDT   LOG:  checkpoints are occurring too frequently (27
seconds apart)
2009-08-26 18:54:01 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:54:27 PDT   LOG:  checkpoints are occurring too frequently (26
seconds apart)
2009-08-26 18:54:27 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:54:51 PDT   LOG:  checkpoints are occurring too frequently (24
seconds apart)
2009-08-26 18:54:51 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 18:55:17 PDT   LOG:  checkpoints are occurring too frequently (26
seconds apart)
2009-08-26 18:55:17 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:06:04 PDT   LOG:  checkpoints are occurring too frequently (26
seconds apart)
2009-08-26 19:06:04 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:12:39 PDT   LOG:  checkpoints are occurring too frequently (29
seconds apart)
2009-08-26 19:12:39 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:13:08 PDT   LOG:  checkpoints are occurring too frequently (29
seconds apart)
2009-08-26 19:13:08 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:14:13 PDT   LOG:  checkpoints are occurring too frequently (28
seconds apart)
2009-08-26 19:14:13 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:15:47 PDT   LOG:  checkpoints are occurring too frequently (29
seconds apart)
2009-08-26 19:15:47 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
2009-08-26 19:16:09 PDT   LOG:  checkpoints are occurring too frequently (22
seconds apart)
2009-08-26 19:16:09 PDT   HINT:  Consider increasing the configuration
parameter "checkpoint_segments".
TopMemoryContext: 49416 total in 6 blocks; 4768 free (5 chunks); 44648 used
  TopTransactionContext: 8192 total in 1 blocks; 7776 free (0 chunks); 416
used
AfterTriggerEvents: 317710336 total in 49 blocks; 1312 free (42 chunks);
317709024 used
  Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344
used
  Operator lookup cache: 24576 total in 2 blocks; 14072 free (6 chunks);
10504 used
  MessageContext: 131072 total in 5 blocks; 13008 free (4 chunks); 118064
used
  smgr relation table: 8192 total in 1 blocks; 2808 free (0 chunks); 5384
used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks);
16 used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 2048 total in 1 blocks; 888 free (0 chunks); 1160 used
  ExecutorState: 65592 total in 4 blocks; 22888 free (45 chunks); 42704
used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashTableContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96
used
  HashBatchContext: 2113560 total in 10 blocks; 915344 free (8
chunks); 1198216 used
HashTableContext: 8192 total in 1 blocks; 8144 free (1 chunks); 48
used
  HashBatchContext: 2113560 total in 10 blocks; 434960 free (8
chunks); 1678600 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 

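[Editorial note: two things stand out in the report above. The repeated
checkpoint warnings are addressed by the hint itself (checkpoint_segments,
a setting that existed through 9.4 and was later replaced by max_wal_size).
The out-of-memory failure comes from the AfterTriggerEvents context, which
in 8.x typically means deferred trigger events -- e.g. foreign-key checks --
queuing up across one huge import transaction. A configuration sketch; the
value is illustrative, not a recommendation:]

```
# postgresql.conf -- illustrative value only; checkpoint_segments was
# removed in PostgreSQL 9.5 in favour of max_wal_size
checkpoint_segments = 32        # 8.x default was 3; raise for bulk loads
```

[For the memory error itself, committing the import in batches keeps the
per-transaction trigger-event queue bounded.]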
[BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-10 Thread Daniel Nagy

The following bug has been logged online:

Bug reference:  5238
Logged by:  Daniel Nagy
Email address:  nagy.dan...@telekom.hu
PostgreSQL version: 8.4.1
Operating system:   Debian Lenny 5.0.3 x86_64. Kernel: 2.6.31.6-grsec
Description:frequent signal 11 segfaults
Details: 

I got postgres segfaults several times a day.
Postgres log:
Dec  9 21:15:07 goldbolt postgres[4515]: [292-1] user=,db= LOG:  0:
server process (PID 8354) was terminated by signal 11: Segmentation fault
Dec  9 21:15:07 goldbolt postgres[4515]: [292-2] user=,db= LOCATION: 
LogChildExit, postmaster.c:2725
Dec  9 21:15:07 goldbolt postgres[4515]: [293-1] user=,db= LOG:  0:
terminating any other active server processes
Dec  9 21:15:07 goldbolt postgres[4515]: [293-2] user=,db= LOCATION: 
HandleChildCrash, postmaster.c:2552

dmesg output:
postmaster[8354]: segfault at 7fbfbde42ee2 ip 004534d0 sp
7fff4b220f90 error 4 in postgres[40+446000]
grsec: Segmentation fault occurred at 7fbfbde42ee2 in
/usr/local/postgres-8.4.1/bin/postgres[postmaster:8354] uid/euid:111/111
gid/egid:114/114, parent
/usr/local/postgres-8.4.1/bin/postgres[postmaster:4515] uid/euid:111/111
gid/egid:114/114

Notes:
- Postgres was built from source with --enable-thread-safety
- Tried several kernels, no luck
- I have psql on a different hardware, same problem happens there too
- There are no sign of HW (memory, disk) errors
- No other daemons (apache, nginx) segfault, only postgres

The binaries are not stripped; how can I help find the cause?

Thanks a lot,

Daniel



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-12 Thread Nagy Daniel
Hi Guys,

Here you are:

na...@goldbolt:~$ gdb /usr/local/pgsql/bin/postgres core
GNU gdb 6.8-debian
...
warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux-x86-64.so.2...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib/libnss_compat.so.2...done.
Loaded symbols for /lib/libnss_compat.so.2
Reading symbols from /lib/libnsl.so.1...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libnss_nis.so.2...done.
Loaded symbols for /lib/libnss_nis.so.2
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Core was generated by `postgres: randir lovehunter 127.0.0.1(33247)
SELECT   '.
Program terminated with signal 11, Segmentation fault.
[New process 11764]
#0  0x00453415 in slot_deform_tuple ()
(gdb)
(gdb) backtrace
#0  0x00453415 in slot_deform_tuple ()
#1  0x0045383a in slot_getattr ()
#2  0x00550dac in ExecHashGetHashValue ()
#3  0x00552a98 in ExecHashJoin ()
#4  0x00543368 in ExecProcNode ()
#5  0x00552aa6 in ExecHashJoin ()
#6  0x00543368 in ExecProcNode ()
#7  0x00552aa6 in ExecHashJoin ()
#8  0x00543368 in ExecProcNode ()
#9  0x00557251 in ExecSort ()
#10 0x00543290 in ExecProcNode ()
#11 0x00555308 in ExecMergeJoin ()
#12 0x00543380 in ExecProcNode ()
#13 0x00557251 in ExecSort ()
#14 0x00543290 in ExecProcNode ()
#15 0x00540e92 in standard_ExecutorRun ()
#16 0x005ecc27 in PortalRunSelect ()
#17 0x005edfd9 in PortalRun ()
#18 0x005e93a7 in exec_simple_query ()
#19 0x005ea977 in PostgresMain ()
#20 0x005bf2a8 in ServerLoop ()
#21 0x005c0037 in PostmasterMain ()
#22 0x00569b48 in main ()
Current language:  auto; currently asm



Thanks,

Daniel


Craig Ringer wrote:
> On 10/12/2009 5:12 AM, Daniel Nagy wrote:
>> The following bug has been logged online:
>>
>> Bug reference:      5238
>> Logged by:  Daniel Nagy
>> Email address:  nagy.dan...@telekom.hu
>> PostgreSQL version: 8.4.1
>> Operating system:   Debian Lenny 5.0.3 x86_64. Kernel: 2.6.31.6-grsec
>> Description:frequent signal 11 segfaults
>> Details:
>>
>> I got postgres segfaults several times a day.
> 
> 
> http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
> 
> --
> Craig Ringer



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-12 Thread Nagy Daniel
(gdb) p debug_query_string
$1 = 12099472

Now I recompiled pg with --enable-debug and waiting for a new core dump.
I'll post the backtrace and the debug_query_string output ASAP.

Please let me know if there is anything more I can do.

Thanks,

Daniel



Tom Lane wrote:
> Nagy Daniel  writes:
>> (gdb) backtrace
>> #0  0x00453415 in slot_deform_tuple ()
>> #1  0x0045383a in slot_getattr ()
>> #2  0x00550dac in ExecHashGetHashValue ()
>> #3  0x00552a98 in ExecHashJoin ()
>> #4  0x00543368 in ExecProcNode ()
>> #5  0x00552aa6 in ExecHashJoin ()
>> #6  0x00543368 in ExecProcNode ()
> 
> Not terribly informative (these binaries are apparently not as
> un-stripped as you thought).  However, this suggests it's a specific
> query going wrong --- "p debug_query_string" in gdb might tell you what.
> Please see if you can extract a test case.
> 
>   regards, tom lane



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-13 Thread Nagy Daniel
Here's a better backtrace:

(gdb) bt
#0  slot_deform_tuple (slot=0xc325b8, natts=21) at heaptuple.c:1130
#1  0x004535f0 in slot_getsomeattrs (slot=0xc325b8, attnum=21)
at heaptuple.c:1340
#2  0x00543cc6 in ExecProject (projInfo=0xc44c98,
isDone=0x7fffe33f30a4) at execQual.c:5164
#3  0x005528fb in ExecHashJoin (node=0xc3f130) at nodeHashjoin.c:282
#4  0x00543368 in ExecProcNode (node=0xc3f130) at execProcnode.c:412
#5  0x00552aa6 in ExecHashJoin (node=0xc3dc90) at nodeHashjoin.c:598
#6  0x00543368 in ExecProcNode (node=0xc3dc90) at execProcnode.c:412
#7  0x00552aa6 in ExecHashJoin (node=0xc37140) at nodeHashjoin.c:598
#8  0x00543368 in ExecProcNode (node=0xc37140) at execProcnode.c:412
#9  0x00557251 in ExecSort (node=0xc37030) at nodeSort.c:102
#10 0x00543290 in ExecProcNode (node=0xc37030) at execProcnode.c:423
#11 0x00555308 in ExecMergeJoin (node=0xc36220) at
nodeMergejoin.c:626
#12 0x00543380 in ExecProcNode (node=0xc36220) at execProcnode.c:408
#13 0x00557251 in ExecSort (node=0xc34000) at nodeSort.c:102
#14 0x00543290 in ExecProcNode (node=0xc34000) at execProcnode.c:423
#15 0x00540e92 in standard_ExecutorRun (queryDesc=0xbc1c10,
direction=ForwardScanDirection, count=0) at execMain.c:1504
#16 0x005ecc27 in PortalRunSelect (portal=0xc30160,
forward=, count=0, dest=0x7f7219b7f3e8)
at pquery.c:953
#17 0x005edfd9 in PortalRun (portal=0xc30160,
count=9223372036854775807, isTopLevel=1 '\001', dest=0x7f7219b7f3e8,
altdest=0x7f7219b7f3e8, completionTag=0x7fffe33f3620 "") at pquery.c:779
#18 0x005e93a7 in exec_simple_query (
query_string=0xb89a00 "SELECT w1.kivel, date_max(w1.mikor,w2.mikor),
w1.megnezes, u.* FROM valogatas_valasz w1, valogatas_valasz w2, useradat
u WHERE w1.ki=65549 AND not w1.del AND w2.kivel=65549 AND w1.megnezes=0
AND w1.ki"...) at postgres.c:991
#19 0x005ea977 in PostgresMain (argc=4, argv=, username=0xaceb10 "randir") at postgres.c:3614
#20 0x005bf2a8 in ServerLoop () at postmaster.c:3447
#21 0x005c0037 in PostmasterMain (argc=3, argv=0xac9820) at
postmaster.c:1040
#22 0x00569b48 in main (argc=3, argv=0xac9820) at main.c:188

(gdb) p debug_query_string
$1 = 0xb89a00 "SELECT w1.kivel, date_max(w1.mikor,w2.mikor),
w1.megnezes, u.* FROM valogatas_valasz w1, valogatas_valasz w2, useradat
u WHERE w1.ki=65549 AND not w1.del AND w2.kivel=65549 AND w1.megnezes=0
AND w1.ki"...


Thanks,

Daniel



Tom Lane wrote:
> Nagy Daniel  writes:
>> (gdb) p debug_query_string
>> $1 = 12099472
> 
> Huh, your stripped build is being quite unhelpful :-(.  I think
> "p (char *) debug_query_string" would have produced something useful.
> 
>   regards, tom lane



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-13 Thread Nagy Daniel
I ran "select * from" on both tables. All rows were returned
successfully, no error logs were produced during the selects.

However there are usually many 23505 errors in indices, like:
Dec 13 10:02:13 goldbolt postgres[21949]: [26-1]
user=randirw,db=lovehunter ERROR:  23505: duplicate key value violates
unique constraint "kepek_eredeti_uid_meret_idx"
Dec 13 10:02:13 goldbolt postgres[21949]: [26-2]
user=randirw,db=lovehunter LOCATION:  _bt_check_unique, nbtinsert.c:301

There are many 58P01 errors as well, like:
Dec 13 10:05:18 goldbolt postgres[7931]: [23-1] user=munin,db=lovehunter
ERROR:  58P01: could not open segment 1 of relation base/16
400/19856 (target block 3014766): No such file or directory
Dec 13 10:05:18 goldbolt postgres[7931]: [23-2] user=munin,db=lovehunter
LOCATION:  _mdfd_getseg, md.c:1572
Dec 13 10:05:18 goldbolt postgres[7931]: [23-3] user=munin,db=lovehunter
STATEMENT:  SELECT count(*) FROM users WHERE nem='t'

Reindexing sometimes helps, but the error logs appear again within
hours.

Recently a new error appeared:

Dec 13 03:46:55 goldbolt postgres[18628]: [15-1]
user=randir,db=lovehunter ERROR:  XX000: tuple offset out of range: 0
Dec 13 03:46:55 goldbolt postgres[18628]: [15-2]
user=randir,db=lovehunter LOCATION:  tbm_add_tuples, tidbitmap.c:286
Dec 13 03:46:55 goldbolt postgres[18628]: [15-3]
user=randir,db=lovehunter STATEMENT:  SELECT * FROM valogatas WHERE
uid!='16208' AND eletkor BETWEEN 39 AND 55 AND megyeid='1' AND
keresettnem='f' AND dom='iwiw.hu' AND appid='2001434963' AND nem='t'
ORDER BY random() DESC



If there is on-disk corruption, would a complete dump and
restore into another directory fix it?

Apart from that, I think that pg shouldn't crash in case of
on-disk corruption, but log an error message instead.
I'm sure it's not as easy to implement as it seems,
but nothing is impossible :)


Regards,

Daniel


Tom Lane wrote:
> Nagy Daniel  writes:
>> Here's a better backtrace:
> 
> The crash location suggests a problem with a corrupted tuple, but it's
> impossible to guess where the tuple came from.  In particular I can't
> guess whether this reflects on-disk data corruption or some internal
> bug.  Now that you have (some of) the query, can you put together a test
> case?  Or try "select * from" each of the tables used in the query to
> check for on-disk corruption.
> 
>   regards, tom lane

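
[Editorial note: the duplicate-key (23505) and missing-segment (58P01)
errors quoted in the message above point at corrupted indexes rather than
heap data, which is consistent with "select * from" succeeding on the
tables. A sketch of a targeted rebuild, using object names taken directly
from the logs in this thread:]

```sql
-- Names below come from the error messages in this thread.
REINDEX INDEX kepek_eredeti_uid_meret_idx;   -- rebuild one suspect index
-- or, more heavy-handed, rebuild every index in the database:
REINDEX DATABASE lovehunter;
```

[A dump and restore into a fresh cluster rebuilds all indexes from the heap
and so also clears index corruption -- but, as the thread goes on to
discuss, neither cure helps if flaky hardware keeps reintroducing it.]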


Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-14 Thread Nagy Daniel
I have pg segfaults on two boxes, a DL160G6 and a DL380g5.
I've just checked their memory with memtest86+ v2.11
No errors were detected.

We also monitor the boxes via IPMI, and there are no signs
of HW failures.

Regards,

Daniel


Tom Lane wrote:
> Nagy Daniel  writes:
>> I ran "select * from" on both tables. All rows were returned
>> successfully, no error logs were produced during the selects.
> 
> Well, that would seem to eliminate the initial theory of on-disk
> corruption, except that these *other* symptoms that you just mentioned
> for the first time look a lot like index corruption.  I concur with
> Pavel that intermittent hardware problems are looking more and more
> likely.  Try a memory test first --- a patch of bad RAM could easily
> produce symptoms like this.
> 
>> Apart from that, I think that pg shouldn't crash in case of
>> on-disk corruptions, but log an error message instead.
> 
> There is very little that software can do to protect itself from
> flaky hardware :-(
> 
>   regards, tom lane



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-15 Thread Nagy Daniel
I upgraded to 8.4.2, did a full reindex and vacuum (there were
no errors). But it segfaults as well:

Core was generated by `postgres: randir lovehunter 127.0.0.1(48268)
SELECT   '.
Program terminated with signal 11, Segmentation fault.
[New process 7262]
#0  slot_deform_tuple (slot=0xc0d3b8, natts=20) at heaptuple.c:1130
1130off = att_align_pointer(off, 
thisatt->attalign, -1,
(gdb) bt
#0  slot_deform_tuple (slot=0xc0d3b8, natts=20) at heaptuple.c:1130
#1  0x00453b9a in slot_getattr (slot=0xc0d3b8, attnum=20,
isnull=0x7fffe48130af "") at heaptuple.c:1253
#2  0x0054418c in ExecEvalNot (notclause=,
econtext=0x1dda4503, isNull=0x7fffe48130af "",
isDone=) at execQual.c:2420
#3  0x0054466b in ExecQual (qual=,
econtext=0xc17000, resultForNull=0 '\0') at execQual.c:4909
#4  0x0054ae55 in ExecScan (node=0xc16ef0, accessMtd=0x550b80
) at execScan.c:131
#5  0x00543da0 in ExecProcNode (node=0xc16ef0) at execProcnode.c:373
#6  0x00553516 in ExecHashJoin (node=0xc15dd0) at nodeHashjoin.c:598
#7  0x00543de8 in ExecProcNode (node=0xc15dd0) at execProcnode.c:412
#8  0x00556367 in ExecNestLoop (node=0xc14cc0) at nodeNestloop.c:120
#9  0x00543e18 in ExecProcNode (node=0xc14cc0) at execProcnode.c:404
#10 0x00556367 in ExecNestLoop (node=0xc12ee0) at nodeNestloop.c:120
#11 0x00543e18 in ExecProcNode (node=0xc12ee0) at execProcnode.c:404
#12 0x00556367 in ExecNestLoop (node=0xc110e0) at nodeNestloop.c:120
#13 0x00543e18 in ExecProcNode (node=0xc110e0) at execProcnode.c:404
#14 0x00557cc1 in ExecSort (node=0xc0eec0) at nodeSort.c:102
#15 0x00543d10 in ExecProcNode (node=0xc0eec0) at execProcnode.c:423
#16 0x005418b2 in standard_ExecutorRun (queryDesc=0xbb95e0,
direction=ForwardScanDirection, count=0) at execMain.c:1504
#17 0x005ed687 in PortalRunSelect (portal=0xbf7000,
forward=, count=0, dest=0x7f863746adb8)
at pquery.c:953
#18 0x005eea39 in PortalRun (portal=0xbf7000,
count=9223372036854775807, isTopLevel=1 '\001', dest=0x7f863746adb8,
altdest=0x7f863746adb8, completionTag=0x7fffe4813650 "") at pquery.c:779
#19 0x005e9e07 in exec_simple_query (
query_string=0xb82e10 "SELECT * FROM valogatas WHERE uid!='64708'
AND eletkor BETWEEN 40 AND 52 AND megyeid='9' AND keresettnem='t' AND
dom='iwiw.hu' AND appid='2001434963' AND nem='f' ORDER BY random()
DESC") at postgres.c:991
#20 0x005eb3d7 in PostgresMain (argc=4, argv=, username=0xacfe90 "randir") at postgres.c:3614
#21 0x005bfe18 in ServerLoop () at postmaster.c:3449
#22 0x005c0ba7 in PostmasterMain (argc=5, argv=0xacaa90) at
postmaster.c:1040
#23 0x0056a568 in main (argc=5, argv=0xacaa90) at main.c:188

(gdb) p (char *) debug_query_string
$1 = 0xb82e10 "SELECT * FROM valogatas WHERE uid!='64708' AND eletkor
BETWEEN 40 AND 52 AND megyeid='9' AND keresettnem='t' AND dom='iwiw.hu'
AND appid='2001434963' AND nem='f' ORDER BY random() DESC"

When I run this query manually, it works.

Regards,

Daniel



Tom Lane wrote:
> Nagy Daniel  writes:
>> I have pg segfaults on two boxes, a DL160G6 and a DL380g5.
>> I've just checked their memory with memtest86+ v2.11
>> No errors were detected.
>> We also monitor the boxes via IPMI, and there are no signs
>> of HW failures.
> 
> Hm.  Well, now that 8.4.2 is out, the first thing you ought to do is
> update and see if this happens to be resolved by any of the recent
> fixes.  (I'm not too optimistic about that, because it doesn't look
> exactly like any of the known symptoms, but an update is certainly
> worth your time in any case.)
> 
> If you still see it after that, please try to extract a reproducible
> test case.
> 
>   regards, tom lane



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-18 Thread Nagy Daniel
I don't know if it's related, but we often have index problems
as well. When performing a full vacuum, many indexes contain
one more row version than their tables:

WARNING:  index "iwiw_start_top_napi_fast_idx" contains 10932 row
versions, but table contains 10931 row versions
WARNING:  index "iwiw_start_top_napi_fast_idx" contains 10932 row
versions, but table contains 10931 row versions
WARNING:  index "iwiw_jatekok_ertek_idx" contains 17 row versions, but
table contains 16 row versions
WARNING:  index "ujtema_nehezseg_idx" contains 696 row versions, but
table contains 695 row versions

etc...

Daniel



Tom Lane wrote:
> Nagy Daniel  writes:
>> I have pg segfaults on two boxes, a DL160G6 and a DL380g5.
>> I've just checked their memory with memtest86+ v2.11
>> No errors were detected.
>> We also monitor the boxes via IPMI, and there are no signs
>> of HW failures.
> 
> Hm.  Well, now that 8.4.2 is out, the first thing you ought to do is
> update and see if this happens to be resolved by any of the recent
> fixes.  (I'm not too optimistic about that, because it doesn't look
> exactly like any of the known symptoms, but an update is certainly
> worth your time in any case.)
> 
> If you still see it after that, please try to extract a reproducible
> test case.
> 
>   regards, tom lane



Re: [BUGS] BUG #5238: frequent signal 11 segfaults

2009-12-19 Thread Nagy Daniel
More info: we disabled autovacuum (we do vacuuming via cron)
and the segfaults seem to be gone.

Daniel


Tom Lane wrote:
> Nagy Daniel  writes:
>> I have pg segfaults on two boxes, a DL160G6 and a DL380g5.
>> I've just checked their memory with memtest86+ v2.11
>> No errors were detected.
>> We also monitor the boxes via IPMI, and there are no signs
>> of HW failures.
> 
> Hm.  Well, now that 8.4.2 is out, the first thing you ought to do is
> update and see if this happens to be resolved by any of the recent
> fixes.  (I'm not too optimistic about that, because it doesn't look
> exactly like any of the known symptoms, but an update is certainly
> worth your time in any case.)
> 
> If you still see it after that, please try to extract a reproducible
> test case.
> 
>   regards, tom lane



[BUGS] BUG #5253: installer fails to populate data directory

2009-12-22 Thread Daniel Convissor

The following bug has been logged online:

Bug reference:  5253
Logged by:  Daniel Convissor
Email address:  dani...@analysisandsolutions.com
PostgreSQL version: 8.4.2-1
Operating system:   Windows XP Pro SP3
Description:installer fails to populate data directory
Details: 

It seems Bug #4785 hasn't been resolved.  I downloaded the one click
installer tonight and have hit a snag.  Running
postgresql-8.4.2-1-windows.exe produces the following error:


Problem running post-install step.  Installation may not complete correctly.
 Error reading the C:/Program Files/PostgreSQL/8.4/data/postgresql.conf


Turns out the data directory is completely empty.  During the install, I did
_not_ un-check the box asking if I wanted to "Install pl/pgsql in template1
database", so it should have been in there as well.

This is a standalone computer, not on a domain.  I ran the installer while
logged into the Administrator account.  Both I and the pre-existing
"postgres" user[1] have permissions to the directory.

cacls "C:\Program Files\PostgreSQL\8.4\data"

C:\Program Files\PostgreSQL\8.4\data DANT41\postgres:(OI)(CI)C
 BUILTIN\Users:R
 BUILTIN\Users:(OI)(CI)(IO)(special
access:)
   GENERIC_READ
  
GENERIC_EXECUTE
 BUILTIN\Power Users:C
 BUILTIN\Power Users:(OI)(CI)(IO)C
 BUILTIN\Administrators:F
 BUILTIN\Administrators:(OI)(CI)(IO)F
 NT AUTHORITY\SYSTEM:F
 NT AUTHORITY\SYSTEM:(OI)(CI)(IO)F
 DANT41\theadmin:F
 CREATOR OWNER:(OI)(CI)(IO)F

NB: "theadmin" is the administrator user on this machine.

[1]  I had Postgres 8.1 installed a while ago.  Haven't used it in a while. 
I uninstalled it before attempting to install 8.4.



Re: [BUGS] BUG #5253: installer fails to populate data directory

2009-12-23 Thread Daniel Convissor
Hi Sachin:

On Wed, Dec 23, 2009 at 02:19:41PM +0530, Sachin Srivastava wrote:
> Can you post the error part of the logs (%TEMP%\install-postgresql.log)?
> Mask out any passwords (if there are any).

I have attached it.  Examining the log reveals some clues that I used to 
manually populate the data directory...

cacls "C:\Program Files\PostgreSQL\8.4\data" /E /T /P postgres:F

cscript //NoLogo \
  "C:\Program Files\PostgreSQL\8.4/installer/server/initcluster.vbs" \
  "postgres" "postgres" "fakepassword" \
  "C:\Program Files\PostgreSQL\8.4" \
  "C:\Program Files\PostgreSQL\8.4\data" 5432 "DEFAULT"


I was then able to start the PostgreSQL service manually via the 
services.msc interface.  The service is being run as the "postgres" user. 
Then I created a user and a database in PostgreSQL without incident.

Please note, that while running the cscript test, it complained a bit.  
Here is the relevant portion of the output:

vv
Ensuring we can write to the data directory (using cacls):
The data is invalid.

The files belonging to this database system will be owned by user 
"theadmin".
This user must also own the server process.
^^


So, there seem to be a couple of bugs in initcluster.vbs:

1) What does it mean it can't write to the directory?  Is the script 
having the "postgres" user attempt to set the permissions on "data" via 
cacls?  Of course it can't.  The script should have the admin user create 
"data" and run the cacls granting permission to the postgres user.  Then 
to test if "postgres" has the proper permissions, touch a file inside 
"data".  If that works, excellent.  If not, fall back to the admin having 
ownership.

2) Ownership should be given to "postgres".  Perhaps that's because of 
the prior issue.

Thanks,

--Dan

-- 
 T H E   A N A L Y S I S   A N D   S O L U T I O N S   C O M P A N Y
data intensive web and database programming
http://www.AnalysisAndSolutions.com/
 4015 7th Ave #4, Brooklyn NY 11232  v: 718-854-0335 f: 718-854-0409
Log started 12/23/09 at 02:21:22
Preferred installation mode : win32
Trying to init installer in mode win32
Mode win32 successfully initialized
Could not find registy key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-8.4 Service 
Account. Setting variable iServiceAccount to empty value
Could not find registy key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-8.4 Super User. 
Setting variable iSuperuser to empty value
Could not find registy key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-8.4 Branding. 
Setting variable iBranding to empty value
Could not find registy key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-8.4 iShortcut. 
Setting variable Shortcuts to empty value
Could not find registy key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-8.4 
DisableStackBuilder. Setting variable iDisableStackBuilder to empty value
[02:21:30] Existing base directory: C:\Program Files\PostgreSQL\8.4
[02:21:30] Existing data directory: C:\Program Files\PostgreSQL\8.4\data
[02:21:30] Using branding: PostgreSQL 8.4
[02:21:30] Using Super User: postgres and Service Account: postgres
[02:21:30] Using Service Name: postgresql-8.4
Executing cscript //NoLogo "C:\Documents and Settings\Administrator\Local 
Settings\Temp\postgresql_installer\installruntimes.vbs" "C:\Documents and 
Settings\Administrator\Local 
Settings\Temp\postgresql_installer\vcredist_x86.exe"
Script exit code: 0

Script output:
 Executing the runtime installer: C:\Documents and Settings\Administrator\Local 
Settings\Temp\postgresql_installer\vcredist_x86.exe
installruntimes.vbs ran to completion

Script stderr:
 

Executing C:\Documents and Settings\Administrator\Local 
Settings\Temp\postgresql_installer\getlocales.exe 
Script exit code: 0

Script output:
 AfrikaansxxCOMMASPxxSouthxxSPxxAfrica=Afrikaans, South Africa
[... snipped by Dan ...]
ZuluxxCOMMASPxxSouthxxSPxxAfrica=Zulu, South Africa

Script stderr:
 

Executing C:\Documents and Settings\Administrator\Local 
Settings\Temp\postgresql_installer\createuser.exe "." "postgres" "fakepassword"
Script exit code: 0

Script output:
 User account 'DANT41\postgres' already exists.
User 'DANT41\postgres' already member of 'Users' group.
C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\POSTGR~1\CREATE~1.EXE ran to completion

Script stderr:
 

Executing C:\Documents and Settings\Administrator\Local 
Settings\Temp\postgresql_installer\validateuser.exe "." "postgres" 
"fakepassword"
Script exit code: 0

Script output:
 C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\POSTGR~1\VALIDA~1.EXE ran to completion

Script stderr:
 

Preparing to Install
Creating directory C:\Program Files\PostgreSQL\8.4
[... snipped by Dan ...]
Unpacking C:\Program Files\PostgreSQL\8.4\symbols\zic.pdb
[02:25:36] Removing the existing ldconfig setting - set during the previous 
installation.
[02:25:36] Running the post-installation/upgrade actions:
[02:25:36] Delete the temporary scrip

[BUGS] Initdb Bug

2000-05-12 Thread Daniel Forsyth

Hi Guys,

Just got V7.0 and ran across a technical problem when doing initdb,
error log attached.

PS:  Thanks for one of the best databases in the world!

--
Dan



We are initializing the database system with username postgres (uid=100).
This user will own all the files and must also own the server process.

ERROR:  Error: unknown type 'oidvector'.

ERROR:  Error: unknown type 'oidvector'.

syntax error 12 : parse error

Creating template database in /usr/local/psql7.0/data/base/template1
initdb: could not create template database
initdb: cleaning up by wiping out /usr/local/psql7.0/data/base/template1



[BUGS] postgresql odbc bug

2002-04-18 Thread Daniel Peccini

Hi
I'm using a PostgreSQL database on a Slackware Linux server.
I have some reports created with Crystal Reports 8.5 on Windows 98 that
retrieve data from the database and communicate with it through the
PostgreSQL ODBC driver (latest version).

These reports show numbers representing money.
In some cases, the cents values are shown as zeros.
Example:

$14,90 appears as $14,00

In the database the value is $14,90.

Then I must restart the PC and run the report again; then it is shown
correctly.

I sent messages to forums and newsgroups and discovered that the Oracle
ODBC driver presented this bug a few months ago. But in the new version of
the Oracle ODBC driver, it was fixed.

Thanks for attention
Daniel

-- 
 Daniel Peccini
 Diretor de Projetos
 Mechatronic System Automação Ltda
 Rua Gal. João Telles 542, Conj 902
 Bairro Bom Fim - Porto Alegre
 Site:www.mechsys.com.br
 Fone/Fax:  51 3312 2826
 Cel.:  51 9947 9056

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[BUGS] BUG #2515: Full Scan with constant column

2006-07-06 Thread Daniel Naschenweng

The following bug has been logged online:

Bug reference:  2515
Logged by:  Daniel Naschenweng
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.7
Operating system:   Red Hat Linux 3.2.3-47.3
Description:Full Scan with constant column
Details: 

--- BEGIN CREATE CONTEXT ---
drop table tab1 cascade;
drop table tab2 cascade;

CREATE TABLE TAB1 (
TAB1_ID SERIAL CONSTRAINT PK_TAB1_ID PRIMARY KEY,
VALOR INTEGER
);

CREATE TABLE TAB2 (
TAB2_ID SERIAL CONSTRAINT PK_TAB2_ID PRIMARY KEY,
TAB1_ID INTEGER,
CONSTRAINT FK_TAB1_TAB2 FOREIGN KEY (TAB1_ID) REFERENCES TAB1 (TAB1_ID)
);

CREATE OR REPLACE FUNCTION POPULA_TAB ()
RETURNS NAME AS '
DECLARE
  I INTEGER;
BEGIN
   FOR i IN 1..10 LOOP
INSERT INTO TAB1 (TAB1_ID,VALOR) VALUES (I,I);
INSERT INTO TAB2 (TAB1_ID) VALUES (I);
   END LOOP;
 RETURN ''OK'';

END;
' language 'plpgsql';

SELECT POPULA_TAB();


--- END CREATE CONTEXT ---

/* Select Seq Scan on tab2: */

explain 
select t2.*
  FROM tab1 t1 LEFT OUTER JOIN (select tab2.*
   , 1 as coluna
   from tab2
) t2 on t1.tab1_id=t2.tab2_id
 WHERE t1.tab1_id=200;

   QUERY PLAN


 Nested Loop Left Join  (cost=0.00..3958.01 rows=1 width=12)
   Join Filter: ("outer".tab1_id = "inner".tab2_id)
   ->  Index Scan using pk_tab1_id on tab1 t1  (cost=0.00..6.01 rows=1
width=4)
 Index Cond: (tab1_id = 200)
   ->  Subquery Scan t2  (cost=0.00..2640.08 rows=104954 width=12)
 ->  Seq Scan on tab2  (cost=0.00..1590.54 rows=104954 width=8)
(6 rows)

/* Correct  plain on tab2: */

explain 
select t2.*
  FROM tab1 t1 LEFT OUTER JOIN (select tab2.*
   --, 1 as coluna
   from tab2
) t2 on t1.tab1_id=t2.tab2_id
 WHERE t1.tab1_id=200;



   QUERY PLAN


 Nested Loop Left Join  (cost=0.00..12.03 rows=1 width=8)
   ->  Index Scan using pk_tab1_id on tab1 t1  (cost=0.00..6.01 rows=1
width=4)
 Index Cond: (tab1_id = 200)
   ->  Index Scan using pk_tab2_id on tab2  (cost=0.00..6.01 rows=1
width=8)
 Index Cond: ("outer".tab1_id = tab2.tab2_id)
(5 rows)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[BUGS] Wishlist: Please add a switch to deactivate the comments in the dump

2006-09-07 Thread Daniel Migowski
This would greatly reduce the effort for me to use a diff on two dumps 
to check for new stuff to insert. There has been a similar discussion 
some years ago, so I guess that I am not the only one who would benefit 
from this.


Thanks in advance,
Daniel Migowski
--

|¯¯|¯¯|Ingenieurbüro Daniel Migowski
|  |  |/|  K U H L M A N N   Mail: [EMAIL PROTECTED]
|  | // |  Nordstr. 10   Tel.: 0441 21 98 89 52
|  | \\ |  26135 Oldenburg   Fax.: 0441 21 98 89 55
|__|__|\|  http://www.ikoffice.deMob.: 0176 22 31 20 76


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[BUGS] BUG #2798: Obsolete timezone data included

2006-12-01 Thread Daniel Baldoni

The following bug has been logged online:

Bug reference:  2798
Logged by:  Daniel Baldoni
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2rc
Operating system:   Any
Description:Obsolete timezone data included
Details: 

G'day folks,

This may not qualify as a "bug", but the timezone data included with the
8.2rc tarballs is out of date.  Will the (at-the-time-of-release) "current"
version be included with the 8.2 production version?

Ciao.

PS: Yes, I realise keeping timezone data up-to-date is
an ongoing battle.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[BUGS] BUG #3289: SIN(PI()) expected to return 0, but returns garbage

2007-05-17 Thread Daniel Kastenholz

The following bug has been logged online:

Bug reference:  3289
Logged by:  Daniel Kastenholz
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1, 8.2.4
Operating system:   Linux, Windows
Description:SIN(PI()) expected to return 0, but returns garbage
Details: 

Trouble case:

Action:
Type in SELECT SIN(PI())

Expected output:
0

Actual output:
* Windows, using 8.2.4:
  garbage (-2.2..)

* Linux, using 8.1:
  garbage (1.22...)

--

Comments:
SIN(3.1414) and SIN(3.1417) come close to 0 and work on both platforms.
SIN(3.1415) and SIN(3.1416) produce the same garbage output as using PI() on
both platforms.
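
A likely explanation (an assumption, since the report truncates the actual values): the results are tiny rounding residues on the order of 1e-16, which is exactly what double-precision floating point predicts rather than garbage, since PI() itself cannot be represented exactly. A minimal Python sketch of the same effect:

```python
import math

# math.pi is the double nearest to the true pi, off by roughly 1.2e-16,
# and sin(pi + eps) is approximately -eps for tiny eps, so the result is
# a minuscule non-zero number - correct behavior at double precision.
result = math.sin(math.pi)
print(abs(result) < 1e-12)  # True: zero for all practical purposes
print(result != 0.0)        # True: but not exactly zero
```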

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[BUGS] BUG #3548: When quickly switching between databases the server lags behind

2007-08-17 Thread Daniel Heyder

The following bug has been logged online:

Bug reference:  3548
Logged by:  Daniel Heyder
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.4
Operating system:   Linux (Red Hat EL4 Update 4 + PostgreSQL 8.2.4 update +
compat libraries)
Description:When quickly switching between databases the server lags
behind
Details: 

Hi,

when I quickly PQconnectdb, give the connection something to do, PQfinish
the connection, and PQconnectdb to another database, the database server
does not keep up; neither does PQconnectdb nor PQfinish block until the
work is complete. This is annoying when I want to delete the still-working
database (it causes an error because it is still in use).

Here is some code which demonstrates the problem (make sure it is the only
process accessing the database):

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

int main()
{
 char *pq_db;
 char *tb_db;
 PGconn *conn;
 PGresult *res;
 
 for (int x = 0; x < 2000; x++)
 { 
  conn = PQconnectdb("host=127.0.0.1 dbname=postgres port=5432");
 
  pq_db = PQdb(conn);
  res = PQexec(conn, "SELECT * FROM pg_catalog.pg_stat_activity ORDER BY
usename, procpid;");
  tb_db = PQgetvalue(res, 0, 1);
  if (strcmp(pq_db, tb_db) != 0)
  {
   fprintf(stderr, "* ERROR WRONG DATABASE OPEN (pq=%s, tb=%s)
**\n", pq_db, tb_db);
   return 1;
  }
  PQclear(res);

  PQexec(conn, "CREATE TABLE x1 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x2 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x3 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x4 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x5 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x6 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x7 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x8 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");
  PQexec(conn, "CREATE TABLE x9 (x integer PRIMARY KEY, y integer,z
timestamp without time zone,u timestamp without time zone);");

  PQexec(conn, "DROP TABLE x1");
  PQexec(conn, "DROP TABLE x2");
  PQexec(conn, "DROP TABLE x3");
  PQexec(conn, "DROP TABLE x4");
  PQexec(conn, "DROP TABLE x5");
  PQexec(conn, "DROP TABLE x6");
  PQexec(conn, "DROP TABLE x7");
  PQexec(conn, "DROP TABLE x8");
  PQexec(conn, "DROP TABLE x9");

  PQfinish(conn);

  conn = PQconnectdb("host=127.0.0.1 dbname=template1 user=csntool
password=comsoft port=5432");
 
  pq_db = PQdb(conn);
  res = PQexec(conn, "SELECT * FROM pg_catalog.pg_stat_activity ORDER BY
usename, procpid;");
  tb_db = PQgetvalue(res, 0, 1);
  if (strcmp(pq_db, tb_db) != 0)
  {
   fprintf(stderr, "* ERROR WRONG DATABASE OPEN (pq=%s, tb=%s)
**\n", pq_db, tb_db);
   return 1;
  }
  PQclear(res);

  PQfinish(conn);
 }
 
 return 0;
}

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[BUGS] BUG #3808: Connections stays open in state CLOSE_WAIT

2007-12-07 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  3808
Logged by:  Daniel Migowski
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   Windows Server 2003
Description:Connections stays open in state CLOSE_WAIT
Details: 

Hello dear bug report readers,

I am using PostgreSQL 8.1.4 on Windows 2003 Server and do my backups with
the pg_dump in the same package. 

From time to time the server side of the connection remains open, with
netstat showing the connection in state CLOSE_WAIT. The pg_dump process
has by then successfully done its job and is long gone, but the connection
stays open.

I noticed this because, together with the connection, many AccessShare
locks on different tables and one ExclusiveLock remain open.

It is possible to cancel the backend process from PgAdmin, but after
grinding the server to a halt with the most minor update script (it was a
DROP NOT NULL), I really get annoyed about this bug.

Even more problematic, the behaviour cannot be reproduced (I tried for
hours); it just occurs occasionally.

I noticed the changelog of version 8.1.10 (Magnus: Windows Socket
Improvements). Will they fix this bug?

Do you need more information to fix this bug?

Thank you very much in advance for taking the time to look into this, and
with best regards,

Daniel Migowski

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[BUGS] BUG #3844: will not install

2007-12-29 Thread daniel birnbaum

The following bug has been logged online:

Bug reference:  3844
Logged by:  daniel birnbaum
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.5-1
Operating system:   windows vista
Description:will not install
Details: 

When I try to install, it reports an error saying the account ** could
not be registered/created.

It then uninstalls everything.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [BUGS] BUG #3808: Connections stays open in state CLOSE_WAIT

2008-01-07 Thread Daniel Migowski

Tom Lane wrote:
> "Dave Page" <[EMAIL PROTECTED]> writes:
>> From: Bruce Momjian <[EMAIL PROTECTED]>
>>> Should we require the form to tell us their exact version number and
>>> throw an error if it isn't the current one?
>>
>> Past experience with other bug systems tells me we'll just end up with
>> people selecting the latest version to ensure the form is accepted, thus
>> making it even harder for us to figure out what the problem really is.
>
> Agreed, that's a pretty awful idea.  It's hard enough to get accurate
> version information, even without creating an incentive for people to
> lie.  Moreover, of those that are too honest to do that, a lot would
> simply never file a report; thus keeping us from hearing about a problem
> that might well still exist.
>
> regards, tom lane

The problem seems to be fixed in the current version. Btw., if there 
hasn't been a similar bug report before, it is still okay to post it, 
for all others who will have the same problem again and can read on the 
mailing list that the solution is an upgrade. You know: never touch a 
running system, especially if it's running a highly used production site.


Btw., the update went without a glitch and was done in 5 minutes; good 
work on that. But I didn't know that beforehand, and asking before 
possibly having to restore something for hours because of a failed update 
seems legitimate to me.


With best regards,
Daniel Migowski





[BUGS] BUG #4115: PostgreSQL ISO format is not really ISO

2008-04-19 Thread Daniel Ruoso

The following bug has been logged online:

Bug reference:  4115
Logged by:  Daniel Ruoso
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.1
Operating system:   Debian GNU/Linux lenny
Description:PostgreSQL ISO format is not really ISO
Details: 

ISO 8601 [1] defines date/time output and is quite widely accepted today,
being the standard used by XML Schema definitions. This means timestamps
have to be in that format to be accepted by an XML validator.

The basic difference between PostgreSQL format and the ISO format is the
absence of a "T" between the date and the time.

[1] http://en.wikipedia.org/wiki/ISO_8601
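
The difference in question, sketched in Python (the timestamp itself is arbitrary):

```python
from datetime import datetime

ts = datetime(2008, 4, 19, 12, 10)

# PostgreSQL's ISO DateStyle separates date and time with a space...
pg_style = ts.strftime("%Y-%m-%d %H:%M:%S")
# ...while strict ISO 8601 (and XML Schema dateTime) uses a "T" separator.
xsd_style = ts.isoformat()

print(pg_style)   # 2008-04-19 12:10:00
print(xsd_style)  # 2008-04-19T12:10:00
```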

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #4115: PostgreSQL ISO format is not really ISO

2008-04-20 Thread Daniel Ruoso

Sat, 2008-04-19 at 12:10 -0500, Jaime Casanova wrote:
> """
> Unlike the previous examples, "2007-04-05 14:30" is considered two
> separate, but acceptable, representations—one for date and the other
> for time. It is then left to the reader to interpret the two separate
> representations as meaning a single time point based on the context.
> """

On the other hand, some important ISO8601-based specifications only
accept the dateTtime notation, for instance XML Schema.

As I was talking on #postgresql, I think it would be nice to have that
output option as one of the date/time output styles (name it ISO8601 or
ISO-strict), and it really doesn't need to be the default (the way pg
uses it now is nice for the human reader).

daniel




-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Deadlock condition in driver

2008-05-21 Thread Daniel Migowski

Hello dear developers,

I came across a deadlock condition in the JDBC driver that arises when 
very large queries, containing thousands of statements, are sent to the 
server with statement.execute(). I need this functionality, which works 
well with fewer than 6445 statements, to send update scripts along with my 
application to the server.


The problem is the result of filled (and unemptied) TCP buffers. The 
driver takes all statements and sends them to the server with the 
extended query protocol. See this log snippet, which contains the last 
10 lines before the eternal halt of the test application below (it repeats 
every 4 lines):


21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)
21:12:22.919 (1)  FE=> Describe(portal=null)
21:12:22.919 (1)  FE=> Execute(portal=null,limit=0)
21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)
21:12:22.919 (1)  FE=> Describe(portal=null)
21:12:22.919 (1)  FE=> Execute(portal=null,limit=0)
21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)

Each statement sent to the server results in the following (yet still 
unreceived) answers sent from the server to the client:


21:27:50.169 (1)  <=BE CommandStatus(SELECT)
21:27:50.169 (1)  <=BE ParseComplete [null]
21:27:50.169 (1)  <=BE BindComplete [null]
21:27:50.169 (1)  <=BE RowDescription(1)
21:27:50.169 (1)  <=BE DataRow

Since the driver is happily sending and the server happily answering, 
after a while the client's TCP receive buffer is full; some milliseconds 
later the server's TCP send buffer, then the server's TCP receive buffer, 
and finally the client's TCP send buffer. Increasing any of them delays 
the problem to a larger number of statements.


When piping my script to psql, or sending it by PGAdmin there is no problem.

I suggest the following solution:

After sending 32 kB (or the current send buffer size, 
Socket.getSendBufferSize()), the client checks, for every following 
statement, the amount of data in the receive buffer. If it is full (which 
means the server might already be blocked), there are two possibilities, 
of which the first is my favorite, and the second the option to choose 
if the first is not supported on the platform.


1. The receive buffer is increased in 32 kB steps (or by the current send 
buffer size, or even 64 kB, taking the server's expected receive buffer 
into account). This would unblock the server and give enough space 
for more (small) responses from the server. Afterwards the receive 
buffer should be trimmed to its original size to avoid the decreased 
performance of large buffers. This method might be a bit slower than the 
currently implemented one in cases where the answers to 32 kB of 
statements would fit into the existing buffers, but you don't expect 
lightning speed in such situations anyway, and anything is better than 
being deadlocked in a production system.
2. We take all data already available in the receive buffer and place it 
into a local byte[] to make space for more data, releasing the blocking on 
the server side this way. This option might be a bit slower than the first.


A deadlock condition can still be artificially constructed: if one sends a 
SELECT that returns large amounts of data from the server to the client, 
so that the server is still not ready to process the next commands when we 
continue to bombard it with statements, there might still be a lock 
condition. But I cannot think of any use case where you have large 
quantities of statements like mine AND expect to receive large amounts of 
data that is discarded anyway, since there are following statements. So we 
can ignore this case and happily send gigabyte-sized update scripts to the 
server in one call to the driver :).


Currently my solution is to increase the TCP buffer size in a copy of 
the PGStream class, which shadows the original class in the driver jar 
(bad style...).
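
The second option described above, draining the receive buffer into a local array while sending, can be sketched outside of JDBC with a plain socket pair. This is an illustrative Python sketch under the stated buffering assumptions, not the driver's actual code:

```python
import select
import socket
import threading

def send_all_draining(sock, payload, chunk=4096):
    """Send payload while draining whatever the peer sends back, so
    neither side can stall forever on a full TCP buffer."""
    received = bytearray()
    view = memoryview(payload)
    sock.setblocking(False)
    while view:
        readable, writable, _ = select.select([sock], [sock], [])
        if readable:
            received += sock.recv(chunk)  # keep the peer's pipe moving
        if writable:
            try:
                view = view[sock.send(view[:chunk]):]
            except BlockingIOError:
                pass
    sock.setblocking(True)
    return received

# Demo peer: echoes every byte back, like a server answering each statement.
a, b = socket.socketpair()

def echo_peer():
    while data := b.recv(4096):
        b.sendall(data)
    b.close()

t = threading.Thread(target=echo_peer)
t.start()

payload = b"x" * 1_000_000            # far larger than the socket buffers
echoed = send_all_draining(a, payload)
a.shutdown(socket.SHUT_WR)            # no more "statements"; drain the rest
while data := a.recv(4096):
    echoed += data
t.join()
print(bytes(echoed) == payload)       # True
```

A naive blocking sendall() in place of the drain loop can wedge once both directions' buffers fill, which is the same shape of deadlock described in the report.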


With best regards,
Daniel Migowski

- TEST APP 

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

/**
* This class demonstrates the deadlock condition in the JDBC driver.
*
* Author: dmigowski
*/
public class DeadLockDemonstration {

   public static void main (String args[]) throws IOException, 
SQLException {

   DriverManager.registerDriver(new org.postgresql.Driver());
   Properties props = new Properties();
   props.setProperty("user","ik");
   props.setProperty("password","ik");
   props.setProperty("loglevel","2");
   Connection c = DriverManager.

[BUGS] pg_ctl -w option does not behave as documented

2002-07-22 Thread Daniel Alvarez




PostgreSQL 7.2.1 / SUSE Linux 8.0

According to the output of "pg_ctl --help", the -w option (lowercase)
should force the frontend to wait until the postmaster's startup completes.
Instead it prompts for a password, as with the -W option (uppercase).
Omitting the option, the frontend still blocks until the end of startup.

The behavior described is trivial and I won't bother you with
unnecessary details.  For reproduction, invoke

   pg_ctl start -w -l somelog -o "-i" -D somecluster

and you'll be prompted for a password.

This was correctly implemented in version 7.1.3 and may require a
change in the invoking scripts when upgrading to version 7.2.1.


[BUGS] createdb rejects both relative and absolute paths with -D

2002-11-15 Thread daniel alvarez

After upgrading the database from 7.2 to 7.2.2, both relative and absolute
paths are rejected by both createdb and SQL CREATE DATABASE. AFAIK our
provider used a default build.

Using createdb I created databases in custom locations several times
before, and it worked ('datenbank/db_cluster' being the relative location
of the database cluster):

This context-independent output illustrates the problem.

immoma=# create database import with location =
'/usr/local/httpd/htdocs/kunden/web41/datenbank/db_cluster';
ERROR:  Absolute paths are not allowed as database locations
immoma=# create database import with location = 'datenbank/db_cluster';
ERROR:  Relative paths are not allowed as database locations

The environment is as follows:

Operating System (uname -a):
Linux julius64 2.4.14 #34 Wed Jan 23 17:41:57 MET 2002 i686 unknown

PostgreSQL Version (psql -V)
psql (PostgreSQL) 7.2
(although it should be 7.2.2 according to our provider)



-- 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[BUGS] Problem with "z_" in Unicode-DB

2003-07-01 Thread Wehrle, Daniel
Hi,

I have the following problem:

I created a database with encoding UNICODE

CREATE DATABASE elements
  WITH ENCODING = 'UNICODE';


In this db there is a table "tblassets"

CREATE TABLE public.tblassets
(
  id serial NOT NULL,
  uri text NOT NULL,
  CONSTRAINT tblassets_pkey PRIMARY KEY (id)
) WITHOUT OIDS;


When I try to execute this SELECT, I get an error:

Statement: 
SELECT ID FROM tblAssets WHERE uri like
'/files/AssetStore/postgreSqlTest/ratz_ruebe.jpg' ;

Error:
ERROR:  Invalid UNICODE character sequence found (0xc000)


If I try this, there is no error:

SELECT ID FROM tblAssets WHERE uri like
'/files/AssetStore/postgreSqlTest/ratzruebe.jpg' ;
SELECT ID FROM tblAssets WHERE uri like
'/files/AssetStore/postgreSqlTest/raty_ruebe.jpg' ;

Does PostgreSQL have a problem with the characters "z_"?

Platform is:
Redhat 8.0 (english)
PostgreSQL 7.3.3 (redhat binaries)
Client pgAdmin3
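
As background, a hedged guess at the cause (not stated in the thread): 0xC0 can never appear in well-formed UTF-8 (as a lead byte it would start an overlong two-byte sequence), and 0x00 is not a continuation byte (those are 0x80..0xBF), so the byte pair in the error is simply not valid UTF-8. Such bytes typically come from Latin-1 data reaching a UNICODE database. A quick Python check:

```python
# The pair from the error message, "(0xc000)", decoded as UTF-8.
bad = bytes([0xC0, 0x00])
try:
    bad.decode("utf-8")
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False
print(is_valid_utf8)  # False
```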

Regards, Daniel


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [BUGS] pg_dump -t option doesn't take schema-qualified table

2003-07-16 Thread Wehrle, Daniel
Hi Tom,

where can I find the -n option in pg_dump? Is there any special version? 
I have 7.3.3.

Regards,

Daniel

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 1 July 2003 15:44
To: Curt Sampson
Cc: [EMAIL PROTECTED]
Subject: Re: [BUGS] pg_dump -t option doesn't take schema-qualified
table names 


Curt Sampson <[EMAIL PROTECTED]> writes:
> It appears that the command "pgsql -t foo.bar" will not dump the table
> bar in the schema foo. I saw a patch a while back to add schema support
> to pg_dump (with the --namespace option), but I did not see a fix for
> this.

IMO that's not a bug; you should spell it pg_dump -n foo -t bar.
The other way is ambiguous with a table named "foo.bar".

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[BUGS] Unclear documentation (IMMUTABLE functions)

2003-09-05 Thread Daniel Schreiber
Hi,

A question on IMMUTABLE functions: do they only have to return the same
value during a session/connection, or during the database server's
lifetime? The documentation does not say anything about that.

I wrote a function that uses the CURRENT_USER session variable, and for my
current usage it is very important that it returns a different value for
each database connection.
The current behaviour is per-session, but it is quite important that this
behaviour stays the same in future releases. If it must be the same during
the server's lifetime, I would have to rewrite the function to have the
CURRENT_USER variable passed as a parameter (that would not be a big issue,
of course).

Thanks for your attention,

Daniel Schreiber
-- 
Daniel Schreiber | ICQ: 220903493
GPG encrypted Mail welcome! Key ID: 25A6B489
Chemnitzer Linux-Tag: 
http://www.tu-chemnitz.de/linux/tag/


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [BUGS] Unclear documentation (IMMUTABLE functions)

2003-09-05 Thread Daniel Schreiber
On Friday, 5 September 2003 16:01, Stephan Szabo wrote:
> On Fri, 5 Sep 2003, Daniel Schreiber wrote:
> > Hi,
> >
> > A question on IMMUTABLE functions: do they only have to return the same
> > value during a session/connection or during the databse server's
> > lifetime? The documentation does not say anything about that.
>
> The function should always return the same result given the same inputs.
> I think you'll want to rewrite to take it as an argument (otherwise there
> might be uses that get converted to a constant where you want it to vary).
>
> This is the section in create function reference page about immutable. I'd
> thought it was clear, but do you have a better suggested wording?
>
>   
>IMMUTABLE indicates that the function always
>returns the same result when given the same argument values; that
>is, it does not do database lookups or otherwise use information not
>directly present in its parameter list.  If this option is given,
>any call of the function with all-constant arguments can be
>immediately replaced with the function value.
>   

The explanation that Tom gave was quite clear, I think.
What about:

IMMUTABLE indicates that the function always returns the 
same result when given the same argument values; that is, it does not do 
database lookups or otherwise use information not directly present in its 
parameter list.  If this option is given, any call at any time of the 
function with all-constant arguments can be immediately replaced with the 
function value. What you are basically controlling here is whether the 
planner will deem it safe to pre-evaluate a call of the function.


Thanks for the quick help,

Daniel
-- 
Daniel Schreiber | ICQ: 220903493
GPG encrypted Mail welcome! Key ID: 25A6B489
Chemnitzer Linux-Tag: 
http://www.tu-chemnitz.de/linux/tag/


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [BUGS] Unclear documentation (IMMUTABLE functions)

2003-09-05 Thread Daniel Schreiber
On Friday, 5 September 2003 17:05, Tom Lane wrote:
> Daniel Schreiber <[EMAIL PROTECTED]> writes:
> > A question on IMMUTABLE functions: do they only have to return the same
> > value during a session/connection or during the databse server's
> > lifetime?
>
> Lifetime, I'd say, since among other things IMMUTABLE is considered to
> mean that it's okay to build an index on the function's results.
>
> Of course, you can cheat: the system makes no attempt to verify whether
> a function that you say is IMMUTABLE really is.  There are cases where
> it's useful to lie about the mutability property of a function.  What
> you are basically controlling here is whether the planner will deem it
> safe to pre-evaluate a call of the function.  Unless you actually do
> build a functional index using a function, I doubt it could matter
> whether the results change from one session to the next.
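
Tom's point about the planner pre-evaluating immutable calls with constant arguments can be sketched as a toy (illustrative Python, nothing like the real planner's code):

```python
# A toy "planner" that pre-evaluates calls marked immutable when all
# arguments are constants - the behaviour described above.
def plan(fn, args, immutable):
    if immutable and all(not callable(a) for a in args):
        value = fn(*args)        # evaluated once, at "plan time"
        return lambda: value     # call replaced by a constant
    return lambda: fn(*args)     # re-evaluated at "run time"

calls = {"n": 0}
def f(x):
    calls["n"] += 1
    return x * 2

node = plan(f, (21,), immutable=True)
print(node(), node())   # 42 42
print(calls["n"])       # 1 - folded to a constant, called only once
```

A function whose result depends on session state (like CURRENT_USER) would return a stale constant if folded this way, which is why STABLE, not IMMUTABLE, is the appropriate marking there.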

Okay, thanks, I think I got it now.

> > I wrote a function that uses the CURRENT_USER session variable, and for
> > my current usage it is very important, that it returns a different value
> > for each database connection.
>
> CURRENT_USER is marked STABLE, not IMMUTABLE.  Not sure what your point
> is here.

Okay, thanks for the help. I will rewrite the function as STABLE. Btw: is 
there documentation on which parts of the environment are STABLE/IMMUTABLE?

Thanks,
Daniel Schreiber
-- 
Daniel Schreiber | ICQ: 220903493
GPG encrypted Mail welcome! Key ID: 25A6B489
Chemnitzer Linux-Tag: 
http://www.tu-chemnitz.de/linux/tag/


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [BUGS] Date 1973/06/03 Conversion Problem in 7.3.4 and 7.3.2.

2003-09-10 Thread Daniel Schreiber
On Wednesday, 10 September 2003 12:52, Torello Querci wrote:
> Hi to all,
>
> I have a problem storing 1973/06/03 date.
>
> If I send this statement
>
> select to_date('03/06/1973','dd/mm/');
>
> in the psql interface I obtain
>
>  to_date
> 
>  1973-06-02
>
> I test this statement with Postgres 7.3.2 and 7.3.4 packaged with Mandrake
> 9.1 and Mandrake 9.2RC1 and obtain the same result.
>
> Can anyone help me?

Could be a Mandrake or compiler problem.
# select version();
version
---
 PostgreSQL 7.3.2 on i386-pc-linux-gnu, compiled by GCC 2.95.4
(1 row)

# select to_date('03/06/1973','dd/mm/');
  to_date
----
 1973-06-03
(1 row)

This is on Debian woody with backported postgres from testing.

HTH,
Daniel
-- 
Daniel Schreiber | ICQ: 220903493
GPG encrypted Mail welcome! Key ID: 25A6B489
Chemnitzer Linux-Tag: 
http://www.tu-chemnitz.de/linux/tag/




[BUGS] Bug? (geo operator @ )

2003-12-22 Thread Daniel Lau
Hi all,

I was trying to extract the '_lines' (path) where the lines are contained in
or lie on a box-like polygon. The result turns wrong if the box-like polygon
is changed from a square (x_max = y_max) to a rectangle (x_max <> y_max).
Please read the example at the end.

Please also notice that although the _lines #2 is contained by both the square
and the rectangle, it does not appear in either result.

I am wondering if it is a bug. Perhaps I did something wrong?

regards,
Daniel Lau
Hong Kong University of Science and Technology


Example:
_

test=# select * from test;
 geoid |_lines
---+---
 0 | ((0,0),(1,1))
 1 | ((1,1),(0,0))
 2 | ((833749,820999),(844362,821001))
 3 | ((100,100),(101,101))
 4 | ((0,0),(1,1),(3,-1))
(5 rows)
 
test=# select * from test  where polygon(pclose(_lines)) @ polygon
'(-2,-2),(85,85)';
 geoid |_lines
---+---
 0 | ((0,0),(1,1))
 1 | ((1,1),(0,0))
(2 rows)
 
test=# select * from test  where polygon(pclose(_lines)) @ polygon
'(-2,-2),(84,85)';
 geoid | _lines
---+
(0 rows)





Re: [BUGS] Bug? (geo operator @ )

2003-12-28 Thread Daniel Lau
Thank you for answering my question.
I made a careless mistake.

regards,
Daniel

On Mon, 22 Dec 2003, Tom Lane wrote:

> Daniel Lau <[EMAIL PROTECTED]> writes:
> > I was trying to extract the '_lines' (path) where the lines are contained in
> > or lie on a box-like polygon. The result turns wrong if the box-like polygon
> > is changed from a square (x_max = y_max) to a rectangle (x_max <> y_max).
> 
> The examples you give are not "box-like"; they are in fact just line
> segments.  You need four points to define a box-like polygon.
> 
> AFAICS the quoted example is correct behavior.
> 
>   regards, tom lane
> 
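To illustrate Tom's point, the containment test behaves as expected once the box is written out as a four-corner polygon (a sketch; the coordinates reuse the -2/84/85 values from the example above):

```sql
-- A real rectangle: four corners, not just a two-point segment.
select * from test
 where polygon(pclose(_lines)) @ polygon '((-2,-2),(84,-2),(84,85),(-2,85))';
```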




Re: [BUGS] abnormal behaviour of a CHECK CONSTRAINT with a regular

2004-02-11 Thread Daniel Struck
> Correct.  It is annoying in some cases, but if the input to the trigger
> isn't of the column datatype, then what type is it?  It's hard to see
> how that could work in general.
> 
> If you want, say, a varchar length constraint to be checked only after
> the trigger runs, I'd counsel declaring the column as plain text and
> writing the length test as a CHECK constraint.

I just had an example where it would have been convenient if one could change
the type.

Let's say you want to give the users the possibility to enter incomplete dates 
(year, year&month, or the complete date). You could not insert this information 
as it is, but later on, for statistics, you would like to be able to treat the 
column as a date.

I have implemented it by storing the user information in a varchar column and 
inserting the whole date in another column.
(if the user only provides the year, the date is inserted as 16.05.year,...).
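A minimal sketch of that two-column layout (table, column, and trigger names are invented for illustration; the date literal assumes a DMY DateStyle, and only the year-only case is handled):

```sql
CREATE TABLE visit (
    date_user varchar(10),   -- raw input: '1999', '11.1999' or '15.11.1999'
    date_calc date           -- completed date, filled in by the trigger
);

CREATE OR REPLACE FUNCTION fill_date_calc() RETURNS trigger AS '
BEGIN
    IF new.date_user ~ ''^[0-9]{4}$'' THEN
        -- year only: substitute a fixed mid-year day, as described above
        new.date_calc := (''16.05.'' || new.date_user)::date;
    ELSE
        new.date_calc := new.date_user::date;
    END IF;
    RETURN new;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trigger_fill_date_calc
    BEFORE INSERT OR UPDATE ON visit
    FOR EACH ROW EXECUTE PROCEDURE fill_date_calc();
```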


Kind regards,

Daniel Struck

-- 
Retrovirology Laboratory Luxembourg
Centre Hospitalier de Luxembourg
4, rue E. Barblé
L-1210 Luxembourg

phone: +352-44116105
fax:   +352-44116113
web: http://www.retrovirology.lu
e-mail: [EMAIL PROTECTED]



Re: [BUGS] Optimizer problem with subselect.c?

2004-02-28 Thread Daniel O'Neill
On Fri, 27 Feb 2004 00:57:36 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:

> "Daniel O'Neill" <[EMAIL PROTECTED]> writes:
> > Anyway, here's the vitals, including our layout:
> 
> Could I trouble you to provide those table and view definitions as an
> SQL script?  (pg_dump -s will help you.)  I'm too short of time to
> manually convert your \d listings into something executable.
> 
>   regards, tom lane
> 

Hmm, the design isn't very 'pick aparty', would it be kosher of me to send you the 
full schema to work with?  I can email it directly.

Thanks,
--Daniel



[BUGS] abnormal behaviour of a CHECK CONSTRAINT with a regular expression when a trigger is added

2004-05-16 Thread Daniel Struck
reg1 (date_user) VALUES ('111.1999');
ERROR:  invalid input syntax for type date: "111.1999"
CONTEXT:  PL/pgSQL function "function_test_reg1" line 8 at assignment
retrovirology=# INSERT INTO test_reg1 (date_user) VALUES ('11:1999');
ERROR:  invalid input syntax for type date: "15.11:1999"
CONTEXT:  PL/pgSQL function "function_test_reg1" line 12 at assignment
retrovirology=#

These values NOW pass the CHECK CONSTRAINT and are injected into the trigger 
function. Normally the trigger shouldn't have an influence on the CHECK CONSTRAINT.
Very strange?

Of course the trigger function now rejects the values, because they don't constitute a 
valid date.


Best regards,
Daniel Struck


-- 
Retrovirology Laboratory Luxembourg
Centre Hospitalier de Luxembourg
4, rue E. Barblé
L-1210 Luxembourg

phone: +352-44116105
fax:   +352-44116113
web: http://www.retrovirology.lu
e-mail: [EMAIL PROTECTED]



Re: [BUGS] abnormal behaviour of a CHECK CONSTRAINT with a regular

2004-05-17 Thread Daniel Struck
> I don't see any bug here; it's just that CHECK constraints are applied
> after any BEFORE triggers are run.  Since the triggers could change the
> data to be inserted, the reverse order would be insecure.


Oops, I did make some false assumptions.

Thank you for the clarification.


If I now understood correctly, first the type is checked, then the trigger 
runs, and last the check constraint is checked.

This also makes it impossible to change the type of a value via a trigger?

Say I wanted to convert a user's input into a date: this wouldn't be possible, 
because the input is not of type date in the first place.

Here is an example I tried out:

CREATE TABLE test(datum DATE);

CREATE OR REPLACE FUNCTION function_test()
RETURNS trigger
AS 'BEGIN

new.datum := (''15.06.'' || new.datum)::date;

RETURN new;
END;'
LANGUAGE plpgsql;

CREATE TRIGGER trigger_test
  BEFORE INSERT OR UPDATE
  ON test FOR EACH ROW
  EXECUTE PROCEDURE function_test();

INSERT INTO test VALUES('2003');
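As described above, this INSERT fails during input coercion, before the trigger ever runs. A sketch of the text-column workaround suggested earlier in the thread (table and trigger names are invented; again assumes a DMY DateStyle):

```sql
-- Accept the raw value as text so input coercion cannot fail, and let
-- the BEFORE trigger derive the real date column from it.
CREATE TABLE test2 (datum_raw text, datum date);

CREATE OR REPLACE FUNCTION function_test2() RETURNS trigger AS '
BEGIN
    new.datum := (''15.06.'' || new.datum_raw)::date;
    RETURN new;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trigger_test2
    BEFORE INSERT OR UPDATE ON test2
    FOR EACH ROW EXECUTE PROCEDURE function_test2();

INSERT INTO test2 (datum_raw) VALUES ('2003');  -- now passes type checking
```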


Best regards,
Daniel

-- 
Retrovirology Laboratory Luxembourg
Centre Hospitalier de Luxembourg
4, rue E. Barblé
L-1210 Luxembourg

phone: +352-44116105
fax:   +352-44116113
web: http://www.retrovirology.lu
e-mail: [EMAIL PROTECTED]



[BUGS] Optimizer problem with subselect.c?

2004-06-08 Thread Daniel O'Neill
776027,
  RI_ConstraintTrigger_776035,
  RI_ConstraintTrigger_776036,
  RI_ConstraintTrigger_776044,
  RI_ConstraintTrigger_776045,
  RI_ConstraintTrigger_776053,
  RI_ConstraintTrigger_776054,
  RI_ConstraintTrigger_776056,
  RI_ConstraintTrigger_776057,
  RI_ConstraintTrigger_776083,
  RI_ConstraintTrigger_776084,
  RI_ConstraintTrigger_776107,
  RI_ConstraintTrigger_776108,
  RI_ConstraintTrigger_776205,
  RI_ConstraintTrigger_776209,
  RI_ConstraintTrigger_776210,
  RI_ConstraintTrigger_776263,
  RI_ConstraintTrigger_776264,
  RI_ConstraintTrigger_776272,
  RI_ConstraintTrigger_776273,
  RI_ConstraintTrigger_776281,
  RI_ConstraintTrigger_776282,
  RI_ConstraintTrigger_776290,
  RI_ConstraintTrigger_776291,
  RI_ConstraintTrigger_776293,
  RI_ConstraintTrigger_776294,
  RI_ConstraintTrigger_776320,
  RI_ConstraintTrigger_776321,
  RI_ConstraintTrigger_776344,
  RI_ConstraintTrigger_776345




Table "roaming_site_session"
   Column|   Type   | Modifiers
-+--+
 username| text | not null
 site_session_sshash | character varying(64)| not null
 createtime  | timestamp with time zone | default ('now'::text)::timestamp(6) 
with time zone
Indexes: roaming_sshash_idx
Triggers: RI_ConstraintTrigger_776112,
  RI_ConstraintTrigger_776349
 



 Table "login_site_session"
   Column|   Type   | Modifiers
-+--+
 login_id| integer  | not null
 site_session_sshash | character varying(64)| not null
 createtime  | timestamp with time zone | default ('now'::text)::timestamp(6) 
with time zone
Indexes: login_sshash_idx
Triggers: RI_ConstraintTrigger_776106,
  RI_ConstraintTrigger_776109,
  RI_ConstraintTrigger_776343,
  RI_ConstraintTrigger_776346
 
---

And I think that's all of them.  Most of the relevant data is in the original 
view (connection_info).  Also, please pardon the obfuscation of the database, 
it's the software's fault!

Thanks,
--Daniel F. O'Neill
fatport.com




Re: [BUGS] Error in 8.0 rc5 with repeat calls to array operator

2005-01-17 Thread Daniel Ceregatti




This doesn't happen on a fresh initdb. In fact, it doesn't happen even
directly after I load data. It might take a day or two, but eventually
the problem manifests itself. I can reproduce it 100% of the time now.
I've noticed this on 8rc1-5. Josh and I will get you a gdb trace ASAP.

Daniel

Tom Lane wrote:

> Josh Berkus writes:
>> However, we've found that if you run thousands of empty array comparisons
>> in a few seconds/minutes, eventually the empty array comparison breaks,
>> and you get:
>> ERROR:  cache lookup failed for function 0
>
> I tried
>
>   while true; do echo "select '{}'::INT[] = '{}'::INT[];" ; done | psql regression >/dev/null
>
> on a couple of different platforms, with and without INTARRAY (which
> isn't invoked by this query anyway, so I doubt it matters).  I haven't
> seen any misbehavior yet.
>
> If you can reproduce this, please attach to the backend with gdb, set a
> breakpoint at errfinish(), and get a stack traceback from the point of
> the error.
>
>   regards, tom lane





[BUGS] Errors using Postgres 7.4.6

2005-03-01 Thread Daniel Agut








Hello,

We are developing under the following platform:

 - Postgres 7.4.6
 - Tomcat 4.1
 - Suse Linux 9.2
 - Hibernate

We are running into some problems with the application.  We get the
following message in the postgres log, which makes the application fail:

 ERROR:  invalid string enlargement request size 1358954492

Other times, it is Tomcat that writes the following errors to its log:

 Unknown Response Type

Is there any incompatibility between Postgres 7.4.6 and the other
technologies we are using?

Thanks for your help.

Regards,

Daniel Agut Rebollo
Departamento de eBusiness Solutions
DAVINCI Consulting Tecnológico
Tel.: +34 93.594.67.00 (Ext. 303)
Móvil: +34 630.62.03.07
Fax: +34 93.580.28.93
Email: [EMAIL PROTECTED]
Url: http://www.dvc.es

[BUGS] BUG #1529: Psql doesn't ask for password

2005-03-07 Thread Daniel Wilches

The following bug has been logged online:

Bug reference:  1529
Logged by:  Daniel Wilches
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   Windows XP
Description:Psql doesn't ask for password
Details: 

Hi,
When I had just installed Postgres everything went OK:
when using
psql -U postgres template1
I was prompted for a password. But from one day on,
psql didn't ask for a password anymore (for any user), even with the -W
flag. It just rejects the connection.

I checked pg_hba and postgres is doing MD5 auth. When I turn this to TRUST,
logins are allowed, but if I turn it back to MD5 or password, the connection
is rejected without asking for a password.

For the moment I'm using TRUSTed "authentication" because
I need to log in, but this is not a good idea :S
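For reference, the pg_hba.conf entry being toggled here would look roughly like this (8.0-era format; the address is illustrative):

```
# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
host    all       all   127.0.0.1/32   md5
```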

Thanks, please tell me if it's a bug or a
"replace-human-please" error :)

Bye, Daniel



[BUGS] BUG #1892: pg_dumpall get five passwords

2005-09-19 Thread Daniel Naschenweng

The following bug has been logged online:

Bug reference:  1892
Logged by:  Daniel Naschenweng
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1 beta2
Operating system:   Windows
Description:pg_dumpall get five passwords
Details: 

The beta 2 version asks for the password five times.

C:\Documents and Settings\daniel>pg_dumpall -U postgres > teste.dmp
Password:
Password:
Password:
Password:



[BUGS] BUG #1893: pg_dumpall - Search path order

2005-09-19 Thread Daniel Naschenweng

The following bug has been logged online:

Bug reference:  1893
Logged by:  Daniel Naschenweng
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1 beta2
Operating system:   Windows
Description:pg_dumpall - Search path order
Details: 

When pg_dumpall creates the dump, it sets the search_path before the schema
is created.
The import gives the error:
NOTICE:  schema "usuario1" does not exist

--
-- PostgreSQL database cluster dump
--

\connect postgres

--
-- Roles
--

CREATE ROLE usuario1 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB
LOGIN;
ALTER ROLE usuario1 SET search_path TO usuario1, public, pg_catalog;
CREATE ROLE usuario2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB
LOGIN;
ALTER ROLE usuario2 SET search_path TO usuario2, public, pg_catalog;
CREATE ROLE usuario3 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB
LOGIN;
ALTER ROLE usuario3 SET search_path TO usuario3, public, pg_catalog;

(...)

--
-- Name: usuario1; Type: SCHEMA; Schema: -; Owner: usuario1
--

CREATE SCHEMA usuario1;



[BUGS] BUG #2260: PGCrypto Memory Problem

2006-02-15 Thread Daniel Blaisdell

The following bug has been logged online:

Bug reference:  2260
Logged by:  Daniel Blaisdell
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.2
Operating system:   Gentoo Linux K:2.6.9
Description:PGCrypto Memory Problem
Details: 

Prereq: 
PGCrypto

Table Setup:
employeeid integer
salt text
md5password text

Problem Query:
select * from table where md5password = crypt('password',salt)

The first time this query is run, I see the postgres process bump up to 8MB
of ram from where it initializes.

On subsequent issues of the same query the postgres's process memory
footprint grows each time.

Initial Memory Usage (from Top)
13463 postgres  17   0 17556 4716  15m S  0.0  0.5   0:00.00 postgres:
postgres fh_dev [local] idle
Initial RSS: 4716


After 1st Query Run:
13570 postgres  16   0 91120  78m  15m S  0.0  8.8   0:01.22 postgres:
postgres fh_dev [local] idle
RSS: 78M

After 2nd Query Run:
13570 postgres  16   0  160m 149m  15m S  0.0 17.0   0:02.60 postgres:
postgres fh_dev [local] idle
RSS: 149M

After 3rd Query Run:
13570 postgres  16   0  232m 221m  15m S 30.9 25.1   0:03.83 postgres:
postgres fh_dev [local] idle
RSS: 232M

4th Query Run:
RSS: 293M

And so on and so forth until all swap space is eaten up.

Hope someone knows what's going on here, i'd love to be able to use the
pgcrypto contribs in production.

-Daniel



Re: [BUGS] BUG #2260: PGCrypto Memory Problem

2006-02-16 Thread Daniel Blaisdell
I appreciate you guys looking at this bug. Taking Tom's suggestion that
it might be a system crypt implementation issue, I upgraded OpenSSL from
0.9.7e to 0.9.7i. I also upgraded any other installed libraries with
"crypt" in the name.

After running ldconfig I then recompiled Postgres 8.1.2 against the
newly installed libraries and ended up with the same results.
Michael's standalone testcase was blowing up the memory usage very
quickly. 

I did notice, as I was doing more testing, that if I disconnect my client
after running the queries, the memory usage drops because the server
process exits.

Other Possibly Useful info:
CFLAGS="-O3 -march=pentium4 -pipe"
MAKEOPTS="-j3"

I'm going to try compiling with the -ssl USE flag set to avoid any
external libraries and attempt to duplicate this bug. I'll let you know
what results I find.

-Daniel

On 2/15/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Wed, Feb 15, 2006 at 01:43:18PM -0500, Tom Lane wrote:
> > Michael Fuhr <[EMAIL PROTECTED]> writes:
> > > I can reproduce this in 8.1.3 on FreeBSD 6.0 and Solaris 9.  Here's
> > > a standalone test case:
> >
> > > SELECT crypt(x::text, '$1$salt') FROM generate_series(1, 500) AS g(x);
> >
> > Interesting, because I see no leak with this example on Fedora 4 or
> > HPUX.  Platform dependency is sounding more and more likely.
>
> Did you test OpenSSL builds?  Both of my systems are built with
> OpenSSL and that causes pgcrypto to use different code in some
> places (e.g., px_find_digest() in internal.c and openssl.c).  I'll
> build and test a non-OpenSSL version when I get a chance.
>
> --
> Michael Fuhr


[BUGS] BUG #5548: ERROR: invalid attnum ## for rangetable entry on EXPLAIN VERBOSE, not on EXPLAIN

2010-07-08 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5548
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.0beta2
Operating system:   Windows XP 32-bit
Description:ERROR: invalid attnum ## for rangetable entry  on
EXPLAIN VERBOSE, not on EXPLAIN
Details: 

I apologize for not including detailed schema information.  It took a lot to
get this to reduce to the point it did, and hopefully this is enough
information to find a bug.  If not, email me back and I'll see how much
schema information I can provide.

When using EXPLAIN [ANALYZE] VERBOSE on the below query:

SELECT t.*
FROM (
SELECT 
TRUE AS is_enrolled,

acal.weight::REAL / (SELECT SUM(acal2.weight) FROM 
allocation_calendar AS
acal2 WHERE acal.year=acal2.year)::REAL AS calc_weight,
(
TRUNC(EXTRACT(EPOCH FROM LENGTH(
PERIOD(
GREATEST(FIRST(acal.daterange), 
acd.tstime),
LEAST(NEXT(acal.daterange), 
FIRST_VALUE(acd.tstime) OVER nextdate)
)
))) / TRUNC(EXTRACT(EPOCH FROM LENGTH(acal.daterange)))
) AS calc_duration,

NULL::integer AS group_id
FROM
allocation_calculated_dates AS acd
INNER JOIN allocation_calendar AS acal ON acd.acalid=acal.id
INNER JOIN log_status AS ls ON ls.sid=acd.sid AND ls.tsrange ~ 
acd.tstime
WINDOW
nextdate AS ( PARTITION BY acd.sid, acd.acalid ORDER BY 
acd.tstime ASC
ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING )
) AS t
CROSS JOIN yearinfo AS yi  -- ON t.year=yi.year
LEFT JOIN group_info AS gi ON gi.id=t.group_id
WHERE t.is_enrolled /* AND yi.allocation_lock=0 */

I receive the following result:

ERROR: invalid attnum 5 for rangetable entry t

This appears to be originating from get_rte_attribute_name() in
parse_relation.c (which might need to consider RTE_SUBQUERY?) 

Stripping the final WHERE clause out (WHERE t.is_enrolled) causes the error
to go away, as do most modifications to joined tables.

When run as a regular SELECT, the query runs fine and produces correct
outputs.

A regular EXPLAIN yields:
"Nested Loop  (cost=1136.45..1146.96 rows=16 width=17)"
"  ->  Subquery Scan on t  (cost=1136.45..1145.71 rows=4 width=17)"
"Filter: t.is_enrolled"
"->  WindowAgg  (cost=1136.45..1145.63 rows=8 width=38)"
"  ->  Sort  (cost=1136.45..1136.47 rows=8 width=38)"
"Sort Key: s.id, wings_demo.allocation_calendar.id,
(GREATEST(first(ls.tsrange), first(CASE WHEN
(wings_demo.allocation_calendar.countdate IS NULL) THEN
wings_demo.allocation_calendar.daterange ELSE
period_cc((wings_demo.allocation_calendar.countdate)::timestamp with time
zone, (wings_demo.allocation_calendar.countdate)::timestamp with time zone)
END)))"
"->  Hash Join  (cost=1055.63..1136.33 rows=8
width=38)"
"  Hash Cond: (ls.sid = s.id)"
"  Join Filter: (ls.tsrange ~
(GREATEST(first(ls.tsrange), first(CASE WHEN
(wings_demo.allocation_calendar.countdate IS NULL) THEN
wings_demo.allocation_calendar.daterange ELSE
period_cc((wings_demo.allocation_calendar.countdate)::timestamp with time
zone, (wings_demo.allocation_calendar.countdate)::timestamp with time zone)
END"
"  ->  Seq Scan on log_status ls  (cost=0.00..76.26
rows=1126 width=20)"
"  ->  Hash  (cost=1055.56..1055.56 rows=6
width=38)"
"->  Hash Join  (cost=1055.07..1055.56
rows=6 width=38)"
"  Hash Cond:
(wings_demo.allocation_calendar.id = acal.id)"
"  ->  HashAggregate 
(cost=1053.93..1054.11 rows=18 width=46)"
"->  Append 
(cost=561.01..1053.75 rows=18 width=46)"
"  ->  Merge Left Join 
(cost=561.01..596.61 rows=17 width=46)"
"Merge Cond: ((s.id
= ao.sid) AND (wings_demo.allocation_calendar.year = ao.year))"
"Filter: (ao.amount
IS NULL)"
"->  Sort 
(cost=472.83..481.28 rows=3378 width=46)"
"  Sort Key:
s.id, wings_demo.allocation_calendar.year"
"  ->  Nested
Loop  (cost=30.02..274.85 rows=3378 width=46)"
"  

Re: [BUGS] BUG #5548: ERROR: invalid attnum ## for rangetable entry on EXPLAIN VERBOSE, not on EXPLAIN

2010-07-09 Thread Daniel Grace
On Thu, Jul 8, 2010 at 10:52 PM, Tom Lane  wrote:
> "Daniel Grace"  writes:
>> I apologize for not including detailed schema information.  It took a lot to
>> get this to reduce to the point it did, and hopefully this is enough
>> information to find a bug.
>
> It is not.  You haven't provided anywhere near enough information
> for someone else to reproduce the failure.  We're not going to
> guess at the tables or views that underlie your query ...
>
>                        regards, tom lane
>

So I've spent the greater portion of the last two hours trying to slim
down the schema and query enough to provide something that can
reproduce this.

While I can reproduce it 100% of the time with live data, I can't get
it to reproduce at all with test data -- though I've included a few
schemas below.  It seems to be based on what plan ends up being
constructed for the query.

This doesn't really affect me at this point -- but my concern is that
it might cause actual problems when paired with the auto_explain
contrib module.

Basic stripped-down schema:
DROP SCHEMA IF EXISTS test CASCADE;
CREATE SCHEMA test;
SET SEARCH_PATH=test,public;

CREATE TABLE allocation_calendar
(
  id serial NOT NULL,
  "year" smallint NOT NULL,
  "name" character varying(64) NOT NULL,
  countdate date,
  availabledate date NOT NULL,
  weight integer NOT NULL,
  daterange integer NOT NULL,   -- Was a PERIOD, but not required to reproduce
  CONSTRAINT allocation_calendar_pkey PRIMARY KEY (id),
  CONSTRAINT allocation_calendar_ux_year UNIQUE (year, name)
);
CREATE INDEX allocation_calendar_ix_year_3 ON allocation_calendar
(year, countdate);
CREATE INDEX allocation_calendar_ix_year_4 ON allocation_calendar
(year, availabledate);

CREATE TABLE yearinfo (
  id serial NOT NULL,
  year smallint NOT NULL,
  CONSTRAINT yearinfo_ux_year UNIQUE (year)
);

INSERT INTO yearinfo (year) SELECT * FROM GENERATE_SERIES(1000, 2000);
INSERT INTO allocation_calendar (year, name, countdate, availabledate,
weight, daterange) SELECT f.v, 'Year ' || f.v, NULL, '-infinity', 1, 2
FROM GENERATE_SERIES(1000, 5000) AS f(v);

ANALYZE allocation_calendar;
REINDEX TABLE yearinfo;
REINDEX TABLE allocation_calendar;

CREATE OR REPLACE VIEW allocation_calculated_dates AS
SELECT acal.id AS acalid, acal.year AS year, null::integer AS tstime
FROM
allocation_calendar AS acal
;

EXPLAIN ANALYZE SELECT t.*
FROM (
SELECT
FIRST_VALUE(acd.tstime) OVER nextdate AS foo
FROM
allocation_calculated_dates AS acd
INNER JOIN allocation_calendar AS acal ON acd.acalid=acal.id
WINDOW
nextdate AS ( PARTITION BY acd.acalid ORDER BY acd.tstime ASC 
ROWS
BETWEEN 1 FOLLOWING AND 1 FOLLOWING )
) AS t
CROSS JOIN yearinfo AS yi  -- ON t.year=yi.year
WHERE t.foo IS NULL



Plan that fails on EXPLAIN VERBOSE:
"Nested Loop  (cost=0.00..132.35 rows=24 width=4)"
"  ->  Subquery Scan on t  (cost=0.00..131.00 rows=6 width=4)"
"Filter: (t.foo IS NULL)"
"->  WindowAgg  (cost=0.00..119.50 rows=1150 width=4)"
"  ->  Merge Join  (cost=0.00..102.25 rows=1150 width=4)"
"Merge Cond: (acal.id = acal.id)"
"->  Index Scan using allocation_calendar_pkey on
allocation_calendar acal  (cost=0.00..42.50 rows=1150 width=4)"
"->  Index Scan using allocation_calendar_pkey on
allocation_calendar acal  (cost=0.00..42.50 rows=1150 width=4)"
"  ->  Materialize  (cost=0.00..1.06 rows=4 width=0)"
"->  Seq Scan on yearinfo yi  (cost=0.00..1.04 rows=4 width=0)"
--> ERROR:  invalid attnum 2 for rangetable entry t
Note: The attnum in question always seems to be 1 more than the number
of columns in t.


Plan that succeeds on EXPLAIN VERBOSE:
"Nested Loop  (cost=0.00..827.88 rows=20020 width=4) (actual
time=0.036..2566.818 rows=4005001 loops=1)"
"  ->  Seq Scan on yearinfo yi  (cost=0.00..15.01 rows=1001 width=0)
(actual time=0.007..0.429 rows=1001 loops=1)"
"  ->  Materialize  (cost=0.00..562.67 rows=20 width=4) (actual
time=0.000..0.850 rows=4001 loops=1001)"
"->  Subquery Scan on t  (cost=0.00..562.57 rows=20 width=4)
(actual time=0.026..14.731 rows=4001 loops=1)"
"  Filter: (t.foo IS NULL)"
"  ->  WindowAgg  (cost=0.00..522.56 rows=4001 width=4)
(actual time=0.025..12.637 rows=4001 loops=1)"
"->  Merge Join  (cost=0.00..462.55 rows=4001
width=4) (actual time=0.016..7.715 rows=4001 loops=1)"
"  Merge Cond: (acal.id = acal.id)"
"  ->  Index Scan using
allocation_calendar_pkey on allocation_calendar acal
(

[BUGS] BUG #5563: Odd behavior with aggregate_func(DISTINCT foo ORDER BY foo)

2010-07-16 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5563
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.0beta3
Operating system:   Windows XP 32-bit
Description:Odd behavior with aggregate_func(DISTINCT foo ORDER BY
foo)
Details: 

The manual states:
"If DISTINCT is specified in addition to an order_by_clause, then all the
ORDER BY expressions must match regular arguments of the aggregate; that is,
you cannot sort on an expression that is not included in the DISTINCT  list.
"

However, in some circumstances Postgres will fail  

DROP TABLE IF EXISTS foo;
CREATE TABLE foo (
t VARCHAR
);

INSERT INTO foo (t) VALUES ('a'), ('a'), ('b'), ('b'), ('c');

SELECT STRING_AGG(DISTINCT t::TEXT ORDER BY t::TEXT) FROM foo;



[BUGS] BUG #5564: Odd behavior with aggregate_func(DISTINCT foo ORDER BY foo)

2010-07-16 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5564
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.0beta3
Operating system:   Windows XP 32-bit
Description:Odd behavior with aggregate_func(DISTINCT foo ORDER BY
foo)
Details: 

(Apologies if there's a duplicate, I may have accidentally submitted too
early.  Tab+spacebar is a bad combination on browsers)

The manual states:
"If DISTINCT is specified in addition to an order_by_clause, then all the
ORDER BY expressions must match regular arguments of the aggregate; that is,
you cannot sort on an expression that is not included in the DISTINCT  list.
"

However, in some circumstances Postgres will fail  with "in an aggregate
with DISTINCT, ORDER BY expressions must appear in argument list" when the
same column is named in both places.  It appears to be related to cases when
the aggregate function in question requires implicit typecasts:

This test case fails with the above error:

DROP TABLE IF EXISTS foo;
CREATE TABLE foo (
t VARCHAR
);

INSERT INTO foo (t) VALUES ('a'), ('a'), ('b'), ('b'), ('c');

SELECT STRING_AGG(DISTINCT t ORDER BY t) FROM foo;

However, if t is cast to text in both halves of the aggregate function, it
works correctly:

SELECT STRING_AGG(DISTINCT t::TEXT ORDER BY t::TEXT) FROM foo;

It also works correctly if t is defined as TEXT instead of VARCHAR in the
table definition.

Note that if t is typecast in the ORDER BY but not the DISTINCT part, the
statement still fails (even though STRING_AGG implicitly casts t to text)



Re: [BUGS] BUG #5563: Odd behavior with aggregate_func(DISTINCT foo ORDER BY foo)

2010-07-23 Thread Daniel Grace
On Fri, Jul 23, 2010 at 10:42 AM, Alex Hunsaker  wrote:
> On Fri, Jul 16, 2010 at 18:04, Daniel Grace  wrote:
>> However, in some circumstances Postgres will fail
>
> How exactly?
>
> Maybe its so obvious I missed it?
>

Please see BUG #5564 -- I accidentally submitted this one before I was
finished typing the details.

-- Daniel



[BUGS] BUG #5645: Query Optimizer fails when it encounters an unsatisfiable part of a query

2010-09-06 Thread Daniel Wagner

The following bug has been logged online:

Bug reference:  5645
Logged by:  Daniel Wagner
Email address:  dt...@cam.ac.uk
PostgreSQL version: 8.4.4
Operating system:   tested under Linux (Ubuntu 10.4) and Windows (7)
Description:Query Optimizer fails when it encounters an
unsatisfiable part of a query
Details: 

I posted this earlier on the general mailing list and received no reply. I
assume I found a genuine bug:

I am using Postgres 8.4.4 on a large-ish amount of data and recently noticed
that my application got very slow at times. I quickly discovered that a
specific query was triggering a sequential scan despite suitable indices
being available. The query in question looks like this:
"select * from kvstore where deviceid = 7 AND (locid >= 1410929 AND locid
<=
1690468) OR (locid = 1690469 and locid <= 1690468)"

Note that the last condition (locid = 1690469 AND locid <= 1690468) can never
be satisfied.
Now, the Postgres optimizer seems to believe that a sequential scan of 16
million rows is the right way of approaching this query, despite having
accurate statistics (I ran VACUUM ANALYZE before to ensure everything is
up-to-date).

However, if I remove the last part and query for "select * from kvstore
where deviceid = 7 AND (locid  >= 1410929 AND locid <= 1690468)", indices
are used and everything works nicely. I believe that the optimizer should
remove the unsatisfiable part of the query, or at least handle it gracefully
(e.g. use it as a parameter for a range query). Since it doesn't do that, I am a
little stumped as to what the correct course of action for me is. I could
try to manually remove "invalid" parts of my query, but then again I don't
want to be patching queries to accommodate a stubborn optimizer if I don't
have to... maybe I stumbled upon a bug?

If you have any further questions please do not hesitate to ask! I'd love to
resolve this issue soon!



[BUGS] BUG #5654: Deferred Constraints don't work

2010-09-13 Thread Daniel Howard

The following bug has been logged online:

Bug reference:  5654
Logged by:  Daniel Howard
Email address:  cheesero...@yahoo.com
PostgreSQL version: 8.4.4
Operating system:   Linux (Ubuntu 10.04.1)
Description:Deferred Constraints don't work
Details: 

The command
SET CONSTRAINTS ALL DEFERRED
seems to have no effect.

According to the manual here:
http://www.postgresql.org/docs/8.4/interactive/sql-set-constraints.html
If a constraint is defined as deferrable, then you can instruct postgres to
wait until the end of a transaction block before checking the constraint. 
This is supposed to work for foreign key constraints.
The simple test case below demonstrates that postgres ignores the set
constraint command and checks the constraint in the middle of a
transaction.


-- Setup two tables, users and items.  One user can have many items.
CREATE TABLE users (id serial PRIMARY KEY, name text NOT NULL);
--NOTICE:  CREATE TABLE will create implicit sequence "users_id_seq" for
serial column "users.id"
--NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"users_pkey" for table "users"
--CREATE TABLE
INSERT INTO users (id, name) VALUES (1,'Daniel');
--INSERT 0 1
CREATE TABLE items (id serial PRIMARY KEY, user_id integer NOT NULL
REFERENCES users ON DELETE RESTRICT DEFERRABLE, itemname text);
--NOTICE:  CREATE TABLE will create implicit sequence "items_id_seq" for
serial column "items.id"
--NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"items_pkey" for table "items"
--CREATE TABLE
INSERT INTO items (user_id, itemname) VALUES (1,'hat');
--INSERT 0 1
--
-- Expect the following to fail because of the foreign key constraint
DELETE FROM users;
--ERROR:  update or delete on table "users" violates foreign key constraint
"items_user_id_fkey" on table "items"
--DETAIL:  Key (id)=(1) is still referenced from table "items".
--
-- Try it in a transaction with the constraint deferred
BEGIN;
--BEGIN
SET CONSTRAINTS ALL DEFERRED;
--SET CONSTRAINTS
-- This time it should work, because the constraint shouldn't be checked
until the end of the transaction
DELETE FROM users;
--ERROR:  update or delete on table "users" violates foreign key constraint
"items_user_id_fkey" on table "items"
--DETAIL:  Key (id)=(1) is still referenced from table "items".
ROLLBACK;
--ROLLBACK



Re: [BUGS] BUG #5654: Deferred Constraints don't work

2010-09-14 Thread Daniel Howard
Thank you Tom for your clear and swift reply.
In case others need it, I'll briefly explain why this issue came about and how 
I eventually solved it.
I am working on a web application which uses postgres as a back end database.  
For unit testing purposes I have set up a test database and a test user.  The 
tests require the database to be reset to a predefined state (database 
fixtures).  I do not want the scripts that handle resetting the database to 
need to know in which order to delete data from tables before reloading the 
fixture data.  I thought that if all my foreign-key constraints were 
deferrable, and I ran the whole thing in a transaction with constraints 
deferred, then I would be able to delete and add the data in any order I 
wanted, provided it was all referentially correct at the end.
However, because of the behavior you explained, the scripts were failing when
they tried to delete rows with foreign key constraints.
One proposed solution was to run the tests as a superuser, and disable all 
table triggers, then enable at the end.  I rejected this because firstly 
running tests as a superuser is asking for trouble, and I was also worried what 
state it would leave the database in if the supplied data was not referentially 
correct.
A better solution in my view is to use the postgres TRUNCATE command, instead 
of DELETE to remove the rows.
Documentation for TRUNCATE: "TRUNCATE quickly removes all rows from a set of
tables. It has the same effect as an unqualified DELETE on each table, but
since it does not actually scan the tables it is faster. Furthermore, it
reclaims disk space immediately, rather than requiring a subsequent VACUUM
operation. This is most useful on large tables."
If you issue the command "TRUNCATE tablename CASCADE" then the data in the 
table is removed without doing the referential integrity checks.  It is safe to 
do this, because if there are any foreign key constraints, then the dependent 
tables are truncated too.
This is perfect for my situation.  Not only can I safely remove the data in 
preparation for a unit test, but I can do so more quickly than using DELETE.
After that, I can safely insert the data in any order because of the described
behavior of SET CONSTRAINTS ALL DEFERRED.
My transaction now looks like this:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
TRUNCATE table1 CASCADE;
TRUNCATE table2 CASCADE;
-- ... etc
INSERT INTO table1 VALUES blah blah ...
INSERT INTO table2 VALUES blah blah ...
-- ... etc
COMMIT;
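
The reset portion of that transaction can be generated mechanically in a fixture runner. A small sketch (my own, with placeholder table names; it only builds the SQL text, leaving execution and the trailing INSERTs plus COMMIT to the caller):

```python
def fixture_reset_sql(tables):
    """Build the opening of a single-transaction fixture reset: truncate
    every listed table with CASCADE under deferred constraint checking,
    so the caller can append INSERTs in any order before COMMIT."""
    stmts = ["BEGIN;", "SET CONSTRAINTS ALL DEFERRED;"]
    stmts += ["TRUNCATE %s CASCADE;" % t for t in tables]
    return "\n".join(stmts)

print(fixture_reset_sql(["users", "items"]))
```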
Best regards, Daniel


--- On Mon, 13/9/10, Tom Lane  wrote:

From: Tom Lane 
Subject: Re: [BUGS] BUG #5654: Deferred Constraints don't work
To: "Daniel Howard" 
Cc: pgsql-bugs@postgresql.org
Date: Monday, 13 September, 2010, 16:08

"Daniel Howard"  writes:
> The command
> SET CONSTRAINTS ALL DEFERRED
> seems to have no effect.

Yes it does.  For instance, in your example setting the mode to deferred
will allow you to insert an items row that doesn't match any users row:

regression=# insert into items(user_id) values(42);
ERROR:  insert or update on table "items" violates foreign key constraint 
"items_user_id_fkey"
DETAIL:  Key (user_id)=(42) is not present in table "users".
regression=# begin;
BEGIN
regression=# SET CONSTRAINTS ALL DEFERRED;
SET CONSTRAINTS
regression=# insert into items(user_id) values(42);
INSERT 0 1
regression=# commit;
ERROR:  insert or update on table "items" violates foreign key constraint 
"items_user_id_fkey"
DETAIL:  Key (user_id)=(42) is not present in table "users".
regression=# 

What you wrote is

> CREATE TABLE items (id serial PRIMARY KEY, user_id integer NOT NULL
> REFERENCES users ON DELETE RESTRICT DEFERRABLE, itemname text);

The ON DELETE RESTRICT part is a "referential action", not a constraint
as such.  Our reading of the SQL standard is that referential actions
happen immediately regardless of deferrability of the constraint part.
So that's why you get an error on deletion of a users row.
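
Consistent with that reading (my addition, not part of the quoted reply): declaring the foreign key with NO ACTION instead of RESTRICT yields a check that can be deferred, because NO ACTION is the constraint check proper rather than a referential action. A sketch of the revised declaration:

```sql
CREATE TABLE items (
    id serial PRIMARY KEY,
    user_id integer NOT NULL
        REFERENCES users ON DELETE NO ACTION DEFERRABLE,
    itemname text
);
-- Inside BEGIN; SET CONSTRAINTS ALL DEFERRED; a DELETE FROM users
-- then succeeds provisionally, with the check running at COMMIT.
```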

            regards, tom lane



  

[BUGS] Planner producing 100% duplicate subplans when unneeded

2010-09-27 Thread Daniel Grace
ERE clause altogether.  Both plans run the same speed
with one parent_id.  The first plan starts losing speed gradually as
the number of parents increase; the second plan is either
all-or-nothing.



In the first case, it seems inefficient to duplicate the subplan for
each reference -- I'd think the (corrected) plan should look something
like this:

Seq Scan on wings_sky.parent p  (cost=0.00..161113.12 rows=1000 width=4)
  Output: p.id, (SubPlan 1), ((SubPlan 1))[1], ((SubPlan 1))[2], ((SubPlan 1))[3]
  SubPlan 1
    ->  Aggregate  (cost=40.26..40.27 rows=1 width=8)
          Output: ARRAY[sum(c.v2), sum(CASE WHEN (c.v1 > 15) THEN c.v2 ELSE 0 END),
                  sum(CASE WHEN (c.v1 > 5) THEN c.v2 ELSE 0 END)]
          ->  Index Scan using child_pkey on wings_sky.child c  (cost=0.00..40.10 rows=20 width=8)
                Output: c.parent_id, c.v1, c.v2
                Index Cond: (c.parent_id = $0)

Is there any chance this might be looked at in a future release?

-- 
Daniel Grace
AGE, LLC
System Administrator and Software Developer
dgr...@wingsnw.com // www.wingsnw.com



[BUGS] BUG #5688: ALTER TABLE ALTER col TYPE newtype fails if col is named in an UPDATE OF col trigger

2010-10-01 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5688
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.0.0
Operating system:   Windows XP 32-bit
Description:ALTER TABLE ALTER col TYPE newtype fails if col is named
in an UPDATE OF col trigger
Details: 

Given the following state:

CREATE TABLE foo (
bar TEXT,
baz TEXT
);

CREATE OR REPLACE FUNCTION foo_trigger_proc() RETURNS TRIGGER LANGUAGE
PLPGSQL AS $$BEGIN RETURN NEW; END$$;
CREATE TRIGGER foo_trigger BEFORE INSERT OR UPDATE OF bar ON foo FOR EACH
ROW EXECUTE PROCEDURE foo_trigger_proc();

The following happens:

> ALTER TABLE foo ALTER bar TYPE VARCHAR;
Fails: 
ERROR: unexpected object depending on column: trigger foo_trigger on table
foo
SQL state: XX000

> ALTER TABLE foo ALTER baz TYPE VARCHAR;
Succeeds (because baz is not named in the trigger)

> ALTER TABLE foo DROP bar;
Correctly produces an error message:
ERROR:  cannot drop table foo column bar because other objects depend on it
DETAIL:  trigger foo_trigger on table foo depends on table foo column bar

> ALTER TABLE foo DROP bar CASCADE;
Correctly removes bar and foo_trigger

ALTER TABLE foo RENAME bar and other variations of variations of ALTER
column seem to function correctly.
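
Until the underlying error is fixed, one possible workaround (my own sketch, not a verified fix for 9.0.0) is to drop and recreate the trigger around the type change, inside a transaction so the trigger is never missing from a committed state:

```sql
BEGIN;
DROP TRIGGER foo_trigger ON foo;
ALTER TABLE foo ALTER bar TYPE VARCHAR;
CREATE TRIGGER foo_trigger BEFORE INSERT OR UPDATE OF bar ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_trigger_proc();
COMMIT;
```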



Re: [BUGS] Planner producing 100% duplicate subplans when unneeded

2010-10-04 Thread Daniel Grace
As a theoretical question (I'm not savvy on Postgres's code but might
be intrigued enough to beat on it anyways), is it feasible to do an
additional pass on the query plan that essentially goes:

- Are these two subplans identical?
- Are they at the same part of the tree?

and if both of these conditions are true, discard one subplan and
rewrite all references to point to the other one?

Assuming it IS possible, are there any particular situations where it
wouldn't work?
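
As a toy model of the proposed pass (mine, purely illustrative): treat plans as nested tuples and key on structure. Real plan trees carry much more state (parameters, costs, target lists), which is exactly what makes the real equivalence check non-trivial.

```python
def dedupe_subplans(subplans):
    """Map each subplan id to a canonical id, collapsing structural
    duplicates so references can be rewritten to point at one copy."""
    canonical = {}   # structure key -> first id seen with that structure
    mapping = {}
    for pid, plan in sorted(subplans.items()):
        mapping[pid] = canonical.setdefault(repr(plan), pid)
    return mapping

plans = {
    1: ("Aggregate", ("IndexScan", "child_pkey")),
    2: ("Aggregate", ("IndexScan", "child_pkey")),  # structural duplicate of 1
    3: ("Aggregate", ("SeqScan", "child")),
}
print(dedupe_subplans(plans))  # {1: 1, 2: 1, 3: 3}
```

The second question in the email is the hard part: two textually identical subplans at different points in the tree may see different parameter bindings, so same-structure alone is not sufficient.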

On Mon, Oct 4, 2010 at 11:47 AM, Robert Haas  wrote:
> On Mon, Sep 27, 2010 at 5:09 PM, Daniel Grace  wrote:
>> Is there any chance this might be looked at in a future release?
>
> This is another interesting example of a case where an inlining-type
> optimization (which is effectively what's happening here, I think)
> turns out to be a negative.  We had one a while back that involved
> actual function inlining, which is not quite what's happening here,
> but it's close.  It doesn't seem too hard to figure out whether or not
> inlining is a win (non-trivial subexpressions should probably never
> get duplicated), but nobody's gotten around to writing the logic to
> make it work yet.  One useful technique is to stick "OFFSET 0" into
> the subquery; that prevents it from being inlined and gives you
> something more like the plan you were hoping for.
>
> Whether or not this will get looked at in a future release is a tough
> question to answer.  It's possible that someone (most likely, Tom)
> will get motivated to fix this out of sheer annoyance with the current
> behavior, or will notice a way to improve it incidentally while making
> some other change.  But of course the only way to make sure it gets
> fixed is to do it yourself (or pay someone to do it).
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise Postgres Company
>



-- 
Daniel Grace
AGE, LLC
System Administrator and Software Developer
dgr...@wingsnw.com // (425)327-0079 // www.wingsnw.com



[BUGS] BUG #5709: PostgreSQL server 8.4.5 does not compile with gcc 4.5

2010-10-14 Thread Daniel Gerzo

The following bug has been logged online:

Bug reference:  5709
Logged by:  Daniel Gerzo
Email address:  dan...@freebsd.org
PostgreSQL version: 8.4.5
Operating system:   FreeBSD
Description:PostgreSQL server 8.4.5 does not compile with gcc 4.5
Details: 

This is the error I get:

gmake[1]: Leaving directory
`/usr/ports/databases/postgresql84-server/work/postgresql-8.4.5/src/timezone
'
gcc45 -O2 -pipe -Wl,-rpath=/usr/local/lib/gcc45 -O3 -flto -march=nocona -O3
-funroll-loops -fno-strict-aliasing -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-fno-strict-aliasing -fwrapv -L../../src/port
-Wl,-rpath=/usr/local/lib/gcc45 -O3 -flto -L/usr/local/lib
-rpath=/usr/lib:/usr/local/lib -L/usr/local/lib -L/usr/local/lib
-L/usr/local/lib  -L/usr/local/lib -Wl,--as-needed -Wl,-R'/usr/local/lib'
-Wl,-export-dynamic access/common/heaptuple.o access/common/indextuple.o
access/common/printtup.o access/common/reloptions.o access/common/scankey.o
access/common/tupdesc.o access/gist/gist.o access/gist/gistutil.o
access/gist/gistxlog.o access/gist/gistvacuum.o access/gist/gistget.o
access/gist/gistscan.o access/gist/gistproc.o access/gist/gistsplit.o
access/hash/hash.o access/hash/hashfunc.o access/hash/hashinsert.o
access/hash/hashovfl.o access/hash/hashpage.o access/hash/hashscan.o
access/hash/hashsearch.o access/hash/hashsort.o access/hash/hashutil.o
access/heap/heapam.o access/heap/hio.o access/heap/pruneheap.o
access/heap/rewriteheap.o access/heap/syncscan.o access/heap/tuptoaster.o
access/heap/visibilitymap.o access/index/genam.o access/index/indexam.o
access/nbtree/nbtcompare.o access/nbtree/nbtinsert.o access/nbtree/nbtpage.o
access/nbtree/nbtree.o access/nbtree/nbtsearch.o access/nbtree/nbtutils.o
access/nbtree/nbtsort.o access/nbtree/nbtxlog.o access/transam/clog.o
access/transam/transam.o access/transam/varsup.o access/transam/xact.o
access/transam/xlog.o access/transam/xlogutils.o access/transam/rmgr.o
access/transam/slru.o access/transam/subtrans.o access/transam/multixact.o
access/transam/twophase.o access/transam/twophase_rmgr.o
access/gin/ginutil.o access/gin/gininsert.o access/gin/ginxlog.o
access/gin/ginentrypage.o access/gin/gindatapage.o access/gin/ginbtree.o
access/gin/ginscan.o access/gin/ginget.o access/gin/ginvacuum.o
access/gin/ginarrayproc.o access/gin/ginbulk.o access/gin/ginfast.o
bootstrap/bootparse.o bootstrap/bootstrap.o catalog/catalog.o
catalog/dependency.o catalog/heap.o catalog/index.o catalog/indexing.o
catalog/namespace.o catalog/aclchk.o catalog/pg_aggregate.o
catalog/pg_constraint.o catalog/pg_conversion.o catalog/pg_depend.o
catalog/pg_enum.o catalog/pg_inherits.o catalog/pg_largeobject.o
catalog/pg_namespace.o catalog/pg_operator.o catalog/pg_proc.o
catalog/pg_shdepend.o catalog/pg_type.o catalog/storage.o catalog/toasting.o
parser/analyze.o parser/gram.o parser/keywords.o parser/parser.o
parser/parse_agg.o parser/parse_cte.o parser/parse_clause.o
parser/parse_expr.o parser/parse_func.o parser/parse_node.o
parser/parse_oper.o parser/parse_relation.o parser/parse_type.o
parser/parse_coerce.o parser/parse_target.o parser/parse_utilcmd.o
parser/scansup.o parser/kwlookup.o commands/aggregatecmds.o commands/alter.o
commands/analyze.o commands/async.o commands/cluster.o commands/comment.o
commands/conversioncmds.o commands/copy.o commands/dbcommands.o
commands/define.o commands/discard.o commands/explain.o
commands/foreigncmds.o commands/functioncmds.o commands/indexcmds.o
commands/lockcmds.o commands/operatorcmds.o commands/opclasscmds.o
commands/portalcmds.o commands/prepare.o commands/proclang.o
commands/schemacmds.o commands/sequence.o commands/tablecmds.o
commands/tablespace.o commands/trigger.o commands/tsearchcmds.o
commands/typecmds.o commands/user.o commands/vacuum.o commands/vacuumlazy.o
commands/variable.o commands/view.o executor/execAmi.o
executor/execCurrent.o executor/execGrouping.o executor/execJunk.o
executor/execMain.o executor/execProcnode.o executor/execQual.o
executor/execScan.o executor/execTuples.o executor/execUtils.o
executor/functions.o executor/instrument.o executor/nodeAppend.o
executor/nodeAgg.o executor/nodeBitmapAnd.o executor/nodeBitmapOr.o
executor/nodeBitmapHeapscan.o executor/nodeBitmapIndexscan.o
executor/nodeHash.o executor/nodeHashjoin.o executor/nodeIndexscan.o
executor/nodeMaterial.o executor/nodeMergejoin.o executor/nodeNestloop.o
executor/nodeFunctionscan.o executor/nodeRecursiveunion.o
executor/nodeResult.o executor/nodeSeqscan.o executor/nodeSetOp.o
executor/nodeSort.o executor/nodeUnique.o executor/nodeValuesscan.o
executor/nodeCtescan.o executor/nodeWorktablescan.o executor/nodeLimit.o
executor/nodeGroup.o executor/nodeSubplan.o executor/nodeSubqueryscan.o
executor/nodeTidscan.o executor/nodeWindowAgg.o executor/tstoreReceiver.o
executor/spi.o foreign/foreign.o lib/dllist.o lib/stringinfo.o
libpq/be-fsstubs.o libpq/be-secure.o libpq/auth.o libpq/crypt.o libpq/hba.o
l

[BUGS] BUG #5792: INSTALLER FAILS!!!

2010-12-17 Thread Daniel Witkowski

The following bug has been logged online:

Bug reference:  5792
Logged by:  Daniel Witkowski
Email address:  daniel.witkow...@gmail.com
PostgreSQL version: 8.4, 9.0
Operating system:   Windows XP
Description:INSTALLER FAILS!!!
Details: 

Make it usable. I tried to install it as an admin and as a non-admin user. I
removed the user and the service, and ran the vbs scripts manually. Every time
I got the same error: problem running post-install step

This sucks!

Regards,
Daniel



[BUGS] BUG #5842: Memory leak in PL/Python when taking slices of results

2011-01-18 Thread Daniel Popowich

The following bug has been logged online:

Bug reference:  5842
Logged by:  Daniel Popowich
Email address:  danielpopow...@gmail.com
PostgreSQL version: 8.4.6
Operating system:   x86_64-pc-linux-gnu (Ubuntu 10.04.1)
Description:Memory leak in PL/Python when taking slices of results
Details: 

There is a memory leak in PL/Python when taking slices of results.  This was
first discussed in pgsql-general:

  http://archives.postgresql.org/pgsql-general/2011-01/msg00367.php

Thanks to Alex Hunsaker for pinpointing the problem to slices.  The
following code (a modification of Alex's) demonstrates the problem well...in
a database with plpythonu installed:

-- leaks big time
CREATE  or replace FUNCTION py_leak() RETURNS void
   LANGUAGE plpythonu
   AS $$
results = plpy.execute("""select generate_series(0, 100)""")
slice_creates_leak = results[:]
for r in slice_creates_leak:
pass
return
$$;

-- does not leak
CREATE  or replace FUNCTION py_no_leak() RETURNS void
   LANGUAGE plpythonu
   AS $$
results = plpy.execute("""select generate_series(0, 100)""")
for noleak in results:
pass
return
$$;


I traced the bug to PLy_result_slice() in src/pl/plpython/plpython.c.  That
function calls the python API function PyList_GetSlice() and erroneously
increments the reference count before returning the result to the caller. 
PyList_GetSlice returns a *new* reference, not a borrowed one, so it should
just return the object as-is.

A patch is attached below.

Cheers,

Dan Popowich


--


*** src/pl/plpython/plpython.c~ 2010-12-13 21:59:19.0 -0500
--- src/pl/plpython/plpython.c  2011-01-18 11:18:28.857831733 -0500
***************
*** 2328,2341 ****
  static PyObject *
  PLy_result_slice(PyObject *arg, Py_ssize_t lidx, Py_ssize_t hidx)
  {
-   PyObject   *rv;
PLyResultObject *ob = (PLyResultObject *) arg;
  
!   rv = PyList_GetSlice(ob->rows, lidx, hidx);
!   if (rv == NULL)
!   return NULL;
!   Py_INCREF(rv);
!   return rv;
  }
  
  static int
--- 2328,2336 ----
  static PyObject *
  PLy_result_slice(PyObject *arg, Py_ssize_t lidx, Py_ssize_t hidx)
  {
PLyResultObject *ob = (PLyResultObject *) arg;
  
!   return PyList_GetSlice(ob->rows, lidx, hidx);
  }
  
  static int



[BUGS] 8.3.5: Types with typnamespace pointing at non-existent pg_namespace oid

2011-02-21 Thread Daniel Farina
A medium-length story short, this query returns non-zero:

select count(distinct typnamespace) from pg_type where  not exists
(select 1 from pg_namespace where oid = pg_type.typnamespace);

I did a very brief search in all the release notes for 8.3.5 to
8.3.14, but have not found precisely what I was looking for (searches
for namespace, schema, type, and corruption).

This was discovered when performing a pg_dump of this user's database,
whereby pg_dump complained when trying to dump types for lack of a
good catalog entry to nab the namespace name from. In our case, two
namespaces seem to be affected. The user of this database was never
privileged enough to even perform CREATE SCHEMA, to my knowledge, and
in fact only have the schema (owned by the postgres superuser) that
they began with.

Is it safe to perform an UPDATE on pg_type to give entries a valid
typnamespace? Is there any forensic evidence I can grab before doing
that to assist in figuring out the mechanism for this bug, if
applicable?

Cheers.

--
fdr



[BUGS] Re: 8.3.5: Types with typnamespace pointing at non-existent pg_namespace oid

2011-02-21 Thread Daniel Farina
On Mon, Feb 21, 2011 at 10:43 PM, Daniel Farina  wrote:
> A medium-length story short, this query returns non-zero:
>
> select count(distinct typnamespace) from pg_type where  not exists
> (select 1 from pg_namespace where oid = pg_type.typnamespace);
>
> I did a very brief search in all the release notes for 8.3.5 to
> 8.3.14, but have not found precisely what I was looking for (searches
> for namespace, schema, type, and corruption).

It may also be useful information to know that no recent shenanigans
have happened on this server: it's been up continuously for about 500
days. That doesn't mean something interesting did not occur a very
long time ago, and I'm currently asking around for any notes about
interesting things that have occurred on this machine.

-- 
fdr



[BUGS] Re: 8.3.5: Types with typnamespace pointing at non-existent pg_namespace oid

2011-02-22 Thread Daniel Farina
On Mon, Feb 21, 2011 at 10:46 PM, Daniel Farina  wrote:
> It may also be useful information to know that no recent shenanigans
> have happened on this server: it's been up continuously for about 500
> days. That doesn't mean something interesting did not occur a very
> long time ago, and I'm currently asking around for any notes about
> interesting things that have occurred on this machine.

From what I can tell, people only see this problem with pg_dump, which
is interesting. This symptom has a very long history:

http://archives.postgresql.org/pgsql-general/2004-02/msg00970.php
http://archives.postgresql.org/pgsql-admin/2006-10/msg00192.php
http://archives.postgresql.org/pgsql-bugs/2005-11/msg00305.php
http://archives.postgresql.org/pgsql-bugs/2010-01/msg00087.php
http://archives.postgresql.org/pgsql-general/2011-02/msg00334.php

Something I'm not sure any of these mention that's very interesting in
my case that may be crucial information:

In my case, there are two "missing" pg_namespace entries, and both
have the same missing relations. Both of them have "credible" looking
OIDs (in the hundreds of thousands, and one after the other) as well
as "credible" looking ancillary information:

* all owners are correct

* there are exactly four relfrozenxid values. They look like this:

SELECT distinct c.relnamespace, relfrozenxid::text
    FROM pg_class c
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
    WHERE nspname IS NULL;

 relnamespace | relfrozenxid
--------------+--------------
       320204 | 0
       320204 | 6573962
       320527 | 0
       320527 | 6574527

Note that relfrozenxid increases along with the oid, which is
generally what you'd expect. Some relations have no frozen xid.

* This is affecting the following features the user has used:
sequences, relations, indexes (in this case, they are all _pkey
indexes)

* There's also a valid version of all these relations/objects that
*are* connected to the schema that's alive and expected. As such, \dt,
\dn seem to work as one would expect.  The modern namespace OID is
378382, which is in line with a smooth monotonic increase over time.

* Each relkind has its own relfilenode, and they all do appear to
exist in the cluster directory. I didn't spot any big ones from a
random sampling (I can write a comprehensive one on request), but some
were 8KB and some were 16KB, which might suggest that some data is in
some of them.

More forensics tomorrow.

Sadly, for whatever reason, pg_dump --schema=public didn't seem to
help me out. We do need a workaround if we wish to keep doing
forensics.

-- 
fdr



Re: [BUGS] Re: 8.3.5: Types with typnamespace pointing at non-existent pg_namespace oid

2011-02-22 Thread Daniel Farina
On Tue, Feb 22, 2011 at 8:54 AM, Tom Lane  wrote:
> Daniel Farina  writes:
>> From what I can tell, people only see this problem with pg_dump, which
>> is interesting. This symptom has a very long history:
>
> Yeah.  There seems to be some well-hidden bug whereby dropping an object
> sometimes fails to drop (some of?) its dependencies.  I'm still looking
> for a reproducible case, or even a hint as to what the trigger condition
> might be.
>
>> In my case, there are two "missing" pg_namespace entries, and both
>> have the same missing relations.
>
> Uh, what do you mean by "same missing relations"?

There is an identical set of relations (including quasi-relations
like indexes and sequences) with relnames and most other properties
that are identical between the versions that are tied with each of the
two missing namespaces. There's also a superset of those (but that may
be partially or totally explained by the current set being more recent
as the application has grown) that are seen with a normal-looking
pg_namespace record. All three copies of these objects seem to have
very sensible pg_class/pg_type/pg_sequence entries in their
respective relnamespaces.

>> * There's also a valid version of all these relations/objects that
>> *are* connected to the schema that's alive and expected.
>
> And this isn't making any sense to this onlooker, either.  Could you
> provide a more detailed explanation of the usage pattern in this
> database?  I speculate that what you mean is the user periodically
> drops and recreates a schema + its contents, but please be explicit.

We run quite a large number of databases, and I unfortunately think
that this particular fault has occurred in what could be called
ancient history, as far as log retention is concerned. Sadly our
investigation will have to be limited to what we can find at this
time, although we can probably slowly work our way to being able to
catch this one in the act. We might also be able to run a catalog
query across other databases to get a sense as to the frequency of the
problem.

It may be worth noting in this case that the user does not own the
schema that is thought to be dropped (or, in fact, any schemas at
all), so DROP SCHEMA as issued by them is not likely a culprit. I will
ask around as to what administrative programs we possess that might
fool with the schema. Still, such a program is probably run many times
across many databases. This is why I'm scratching my head about the
fact that two sets of such bogus relnamespace references were
produced.

Although I have no idea how such a thing could happen, is it possible
that both copies come from one occurrence of the bug?

> Yeah, pg_dump is written to glom onto everything listed in the catalogs
> and sort it out later.  So it tends to notice inconsistencies that you
> might not notice in regular usage of the database.  It's sort of hard to
> avoid, since for example a --schema switch depends on seeing which
> objects belong to which schema ...

I figured as much, although if it were written slightly differently
(starting from oid where nspname = 'public') then perhaps it would not
run into problems. I was meaning to poke at pg_depend to see if
anything interesting can be seen in there.

I'll probably hack up pg_dump to try to step around the yucky
relations so we can ensure that this database gets a clean-looking
restore elsewhere before we put the strange-looking database on ice --
permanently, if you think there is no value in having it around.

-- 
fdr



[BUGS] BUG #5943: jdbc client doesn't omit pg_toast_temp tables from schema views

2011-03-23 Thread Daniel Ceregatti

The following bug has been logged online:

Bug reference:  5943
Logged by:  Daniel Ceregatti
Email address:  dan...@ceregatti.org
PostgreSQL version: N/A
Operating system:   N/A
Description:jdbc client doesn't omit pg_toast_temp tables from
schema views
Details: 

In the file org/postgresql/jdbc2/AbstractJdbc2DatabaseMetaData.java is the
query:

sql = "SELECT nspname AS TABLE_SCHEM FROM pg_catalog.pg_namespace WHERE
nspname <> 'pg_toast' AND nspname !~ '^pg_temp_' ORDER BY TABLE_SCHEM";

Using Data Tools Platform in eclipse I was able to see all the
pg_toast_temp_XXX schemas in the schema view.

Using postgresql-9.0-801.jdbc4.jar

Please note that RhodiumToad in #postgresql on irc.freenode.org actually
discovered this based on a conversation he, ojacobson, and I had, and I'm
simply reporting what he discovered.
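
For reference, a filter that would also hide the temp toast schemas might look like this (my sketch of the idea, not the actual driver patch):

```sql
SELECT nspname AS TABLE_SCHEM
  FROM pg_catalog.pg_namespace
 WHERE nspname <> 'pg_toast'
   AND nspname !~ '^pg_temp_'
   AND nspname !~ '^pg_toast_temp_'
 ORDER BY TABLE_SCHEM;
```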

Thanks,

Daniel



[BUGS] BUG #5965: PostgreSQL crashes during connection

2011-04-06 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  5965
Logged by:  Daniel Migowski
Email address:  dmigow...@ikoffice.de
PostgreSQL version: 8.3.9
Operating system:   Windows 7
Description:PostgreSQL crashes during connection
Details: 

Hi,

My PostgreSQL server just crashed during an attempt to connect to it
(multiple times at once, a connection pool just started up).

These lines were found in the logfile afterwards:

2011-04-06 11:38:03 CEST [2860] LOG:  server process (PID 19812) was
terminated by exception 0xC12D
2011-04-06 11:38:03 CEST [2860] HINT:  See C include file "ntstatus.h" for a
description of the hexadecimal value.
2011-04-06 11:38:03 CEST [2860] LOG:  terminating any other active server
processes
2011-04-06 11:38:03 CEST [12208] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [12208] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [12208] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [23548] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [23548] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [23548] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [24024] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [24024] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [24024] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [120] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [120] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [120] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [10948] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [10948] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [10948] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [26172] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [26172] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [26172] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [12176] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [12176] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [12176] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [3420] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [3420] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [3420] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [11184] WARNING:  terminating connection because of
crash of another server process
2011-04-06 11:38:03 CEST [11184] DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2011-04-06 11:38:03 CEST [11184] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2011-04-06 11:38:03 CEST [7664] WARNING:  terminating connection because of
crash of another server proce

[BUGS] BUG #5969: ecpg can't see sqlca

2011-04-09 Thread Daniel Lyons

The following bug has been logged online:

Bug reference:  5969
Logged by:  Daniel Lyons
Email address:  fus...@storytotell.org
PostgreSQL version: 9.0.3
Operating system:   FreeBSD 8.2-RELEASE
Description:ecpg can't see sqlca
Details: 

Under the current FreeBSD and the current PostgreSQL, ecpg-processed C
programs don't see the sqlca global variable. This wreaks havoc with error
handling. For example:

#include <stdio.h>
#include <stdlib.h>

EXEC SQL INCLUDE sqlca;

int main()
{
  /* stop on any error */
  EXEC SQL WHENEVER SQLERROR STOP;

  /* connect to a non-existent database */
  EXEC SQL CONNECT TO nonexistentdb USER nonexistentuser;

  /* if we get here, something is wrong. */
  printf("I'm still here, even though this database doesn't exist.\n");
  return 0;
}

Same program does the right thing on a Linux machine I have access to, so it
may be a platform issue.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #5985: CLUSTER ... USING can fail with ERROR: index xxx does not belong to table yyy

2011-04-18 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5985
Logged by:  Daniel Grace
Email address:  thisgenericn...@gmail.com
PostgreSQL version: 9.1a5
Operating system:   Win7 x64, also seen on Debian
Description:CLUSTER ... USING can fail with ERROR:  index xxx does
not belong to table yyy
Details: 

[Apologies if this duplicates.  NoScript apparently thought me submitting
this was XSS, so I'm not sure if the first report went through or not.]

I've had no luck reducing this to a reproducible test case, but here goes
anyways:

I have a lengthy script that consists of inputting a bunch of SQL files into
Postgres in sequence.  Essentially the first file is importing a database
from MySQL and the subsequent files are doing schema alterations (separated
out by table), with a sort of dependency solving mechanism built into the
script.

One such file (contents listed below, admittedly not 100% useful without a
full schema and data) makes a bunch of alterations to a table but fails when
it reaches CLUSTER:

psql:D:/SVN/wings/wings/branches/devpg/db/lib/course.sql:38: ERROR:  index
178138 does not belong to table 176177

However, this failure only occurs if the file is wrapped in a transaction
block.  Outside of a transaction block, it works fine.

My theory is this has something to do with the new CLUSTER change, in
conjunction with the fact that an index with the specified name is being
dropped and then created in the same transaction.  The same dataset on 9.0
works without any issues.

Partial SQL is below.  Note that it works when not wrapped with a
transaction block.

-- @Requires: data, flags
-- @Provides: course

CREATE TRIGGER _restrict BEFORE UPDATE OF id ON course EXECUTE PROCEDURE
update_restricted__tproc();
ALTER TABLE course
ALTER credit_designation_inherit TYPE BOOLEAN USING
credit_designation_inherit<>0,
ALTER credit_designation_inherit SET DEFAULT TRUE,
ADD FOREIGN KEY(gid) REFERENCES group_info(id) ON UPDATE CASCADE ON DELETE RESTRICT,
ALTER section DROP NOT NULL,
ALTER slp_mindays DROP NOT NULL,
ALTER credits DROP NOT NULL,
DROP COLUMN IF EXISTS lock_token
;

UPDATE course SET section=NULLIF(section, 0), credits=NULLIF(credits,0),
slp_mindays=NULLIF(slp_mindays,0);

--These aren't immutable.
--CREATE INDEX course_ix_start ON course ((startdate+starttime));
--CREATE INDEX course_ix_end   ON course ((enddate+endtime));

DROP INDEX course_ix_flags;
DROP INDEX course_ix_location;

DROP INDEX course_ix_credit_designation_inherit;
DROP INDEX course_ix_grademin;
CREATE INDEX course_ix_grademin ON course(grademin);
CREATE INDEX course_ix_grademax ON course(grademax);

DROP INDEX course_ix_year;
CREATE INDEX course_ix_year ON course(year);

DROP INDEX course_ix_origin_course;
CREATE INDEX course_ix_origin_course ON course(origin_course) WHERE
origin_course IS NOT NULL;

DROP INDEX course_ix_origin_op;
CREATE INDEX course_ix_origin_op ON course(origin_op) WHERE origin_op IS NOT
NULL;

CLUSTER VERBOSE course USING course_ix_year;
ANALYZE course;

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #5986: pg_dump output is not ordered

2011-04-19 Thread Daniel Migowski

The following bug has been logged online:

Bug reference:  5986
Logged by:  Daniel Migowski
Email address:  dmigow...@ikoffice.de
PostgreSQL version: 8.3.14
Operating system:   Many
Description:pg_dump output is not ordered
Details: 

Hi,

I would love to give pg_dump an option to produce ordered output. This would
make it much easier to create dump files that are comparable with a textual
diff tools.

The following orderings are required:

* Dump objects ordered by name. I know this cannot be done with all objects,
but at least retain a reproducible, deterministic order.

Also nice would be:

* Dump data ordered by primary key, or if not available, dump data ordered
by all columns. 

In addition it would be nice to have an option to omit oid comments, because
oids are usually not used in databases.
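In the meantime, a crude workaround is to normalize dumps before diffing them. A minimal sketch (not part of pg_dump; it assumes a plain-text dump and splits on statement-terminating semicolons, so it is only suitable for comparison, never for restore):

```python
def normalize_dump(text):
    """Split a plain-text dump into ';'-terminated statements and sort
    them, so two dumps of the same schema diff cleanly.  Comments and
    blank lines are dropped.  For comparison only -- reordering breaks
    the dependency order a real restore needs."""
    chunks, cur = [], []
    for line in text.splitlines():
        if line.startswith("--") or not line.strip():
            continue  # drop comments and blank lines
        cur.append(line)
        if line.rstrip().endswith(";"):
            chunks.append("\n".join(cur))
            cur = []
    if cur:  # trailing statement without a terminator
        chunks.append("\n".join(cur))
    return "\n".join(sorted(chunks))
```

Two dumps of the same schema then compare equal regardless of the order pg_dump emitted the objects in, which is enough for a textual diff.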

Thank you very much.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #5987: Rows created by WITH (INSERT ... RETURNING) are not visible to the rest of the query

2011-04-19 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  5987
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.1-alpha5
Operating system:   Win7 x64
Description:Rows created by WITH (INSERT ... RETURNING) are not
visible to the rest of the query
Details: 

This may not be a bug, but I'll describe it anyways.  Apologies if this is a
duplicate -- "WITH" is apparently a stopword and searching the archives
using it produces zero results.

9.1 adds the ability to use WITH in INSERT/UPDATE/DELETE in both
directions.

However, the actual effects of the query in the WITH clause are not visible
to the outer query.  

-
DROP TABLE IF EXISTS a;
CREATE TABLE a ( t TEXT );

INSERT INTO a VALUES ('test1') RETURNING *; -- Works as expected

WITH b AS (INSERT INTO a VALUES ('test2') RETURNING *) SELECT * FROM b; --
Works as expected

WITH b AS (INSERT INTO a VALUES ('test3') RETURNING *) SELECT a.* FROM a
INNER JOIN b USING(t);  -- Does not see the newly created row.

SELECT * FROM a WHERE t='test3';-- But it was created.

WITH b AS (INSERT INTO a VALUES ('test4') RETURNING *) -- Does not see the
newly created row, thus the update does not happen.
UPDATE a SET t='test5'
FROM b
WHERE a.t=b.t;

SELECT * FROM a;
-
This is also true if the WITH query is a stored procedure that modifies the
database and returns results, i.e. WITH b AS (SELECT * FROM
create_row('test6'))...

Presumably it affects UPDATE and DELETE as well, but I didn't test those
cases.

My actual use case is: I'm calling a function to duplicate+modify some rows.
 (Essentially, it does INSERT ... SELECT from the same table, but forcing
the primary key to be reassigned via being a serial column and some other
changes).  This function returns the new rows as results (doing RETURN QUERY
INSERT ... SELECT ... RETURNING *).  In some situations, I want to further
update the freshly created rows, so the goal was to do this:

WITH newrows AS (SELECT * FROM function_that_creates_rows(...))
UPDATE basetable SET foo=overrides.bar
FROM newrows 
LEFT JOIN (VALUES (...)) AS overrides(...)
WHERE ...

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #6005: ALTER ROLE ... VALID UNTIL 'infinity' crashes postmaster on a fresh install

2011-05-04 Thread Daniel Grace

The following bug has been logged online:

Bug reference:  6005
Logged by:  Daniel Grace
Email address:  dgr...@wingsnw.com
PostgreSQL version: 9.1-beta1
Operating system:   Win7 x64 (x86 postgres)
Description:ALTER ROLE ... VALID UNTIL 'infinity' crashes postmaster
on a fresh install
Details: 

While trying to restore a database created under 9.1 alpha (created with
pg_dumpall):

CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN
REPLICATION PASSWORD '';
CREATE ROLE foo;
ALTER ROLE foo WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN
NOREPLICATION VALID UNTIL 'infinity';

The final statement causes postmaster to crash.  I attempted it again
breaking each part of the ALTER ROLE into individual statements, e.g.
ALTER ROLE foo WITH NOSUPERUSER;
ALTER ROLE foo WITH INHERIT;
...

and it was the ALTER ROLE foo VALID UNTIL 'infinity' line causing the crash.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #6005: ALTER ROLE ... VALID UNTIL 'infinity' crashes postmaster on a fresh install

2011-05-12 Thread Daniel Grace
I don't have the means to easily compile a PG build, but if there's a
place to get nightly builds or such I'd be happy to give it a shot and
report back.

On Wed, May 11, 2011 at 5:02 PM, Tom Lane wrote:
> Robert Haas writes:
>> Will commit 2e82d0b396473b595a30f68b37b8dfd41c37dff8 have possibly fixed 
>> this?
>
> This should now be fixed as of HEAD, but it wouldn't be a bad idea for
> someone to check that it really works on Windows.  Look at whether the
> timezone_abbreviations GUC has a sane value and you can use timezone
> abbreviations, not just whether 'infinity' crashes.
>
>                        regards, tom lane
>



-- 
Daniel Grace
AGE, LLC
System Administrator and Software Developer

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #6035: server crash when executing recursive query (trying to allocate 16 Exabyte memory)

2011-05-23 Thread Daniel Schreiber

The following bug has been logged online:

Bug reference:  6035
Logged by:  Daniel Schreiber
Email address:  daniel.schrei...@ergora.eu
PostgreSQL version: 9.0.4
Operating system:   Debian Squeeze
Description:server crash when executing recursive query (trying to
allocate 16 Exabyte memory)
Details: 

When I execute the query at 
http://www.ergora.eu/data/postgresql/rekursiv_sl.sql on the data at
http://www.ergora.eu/data/postgresql/crashdump.sql.lzma I get unexpected
results:
- on a debian backports postgresql 9.0.4: server crashes, connection drops,
server starts up again. If I do the same on a hot standby server in recovery
mode, the standby crashes and does not start up again.
- on a postgresql compiled from sources with --enable-debug I get ERROR: 
invalid memory alloc request size 18446744055163387672

How can I help to analyze the issue?

Thanks,

Daniel

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #6170: hot standby wedging on full-WAL disk

2011-08-19 Thread Daniel Farina

The following bug has been logged online:

Bug reference:  6170
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.0.4
Operating system:   GNU/Linux Ubuntu 10.04 x86_64
Description:hot standby wedging on full-WAL disk
Details: 

After seeing this a few times, I think I've found a reproducible way to
prevent Postgres from making progress with hot standby.

1) Set up a WAL disk that will run out of space in a reasonable amount of
time.

2) Run a hot standby with a restore_command and primary_conninfo set
in recovery.conf.  ***Configure it to disable query cancellation***.

3) Begin a transaction, or long-running statement that prevents the
application of WAL records.
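For reference, step 2 might look like the following on 9.0 (host names and paths are placeholders; the query-cancellation knobs live in postgresql.conf, not recovery.conf):

```
# recovery.conf (9.0-era syntax; values are placeholders)
standby_mode = 'on'
restore_command = 'cp /path/to/wal_archive/%f "%p"'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'

# postgresql.conf on the standby -- step 2's "disable query cancellation":
# max_standby_streaming_delay = -1
# max_standby_archive_delay = -1
```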

When the hot standby falls behind the primary it'll eventually bump out of
streaming mode, and will accumulate WAL until the disk fills.

Eventually the WAL disk will fill, and the hot standby cannot make any
progress until one deletes some WAL segments or otherwise makes a tiny bit
more room to work with.  This state persists past killing the offensive
long-running-transaction backend and even a postgres restart.  In the latter
case, one cannot even become 'hot' again, getting the "database system is
starting up" message, as Postgres wants to run a restore_command
immediately.

Furthermore, it appears that WAL segments from the future part of the
timeline (beyond what is being recovered at the moment) are stored on-disk
at that time.  I also think I have identified some WAL segments that are
from before the prior checkpoint location via pg_controldata, so they
technically could be pruned.  My wal_keep_segments is set, but I am not sure
if this has an effect on a hot standby.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #6170: hot standby wedging on full-WAL disk

2011-08-25 Thread Daniel Farina
On Thu, Aug 25, 2011 at 10:16 AM, Heikki Linnakangas wrote:
> On 25.08.2011 19:11, Robert Haas wrote:
>>
>> On Mon, Aug 22, 2011 at 2:57 AM, Heikki Linnakangas
>>   wrote:
>>>
>>> So the problem is that walreceiver merrily writes so much future WAL that
>>> it
>>> runs out of disk space? A limit on the maximum number of future WAL files
>>> to
>>> stream ahead would fix that, but I can't get very excited about it.
>>> Usually
>>> you do want to stream as much ahead as you can, to ensure that the WAL is
>>> safely on disk on the standby, in case the master dies. So the limit
>>> would
>>> need to be configurable.
>>
>> It seems like perhaps what we really need is a way to make replaying
>> WAL (and getting rid of now-unneeded segments) to take priority over
>> getting new ones.
>
> With the defaults we start to kill queries after a while that get in the way
> of WAL replay. Daniel had specifically disabled that. Of course, even with
> the query-killer disabled, it's possible for the WAL replay to fall so badly
> behind that you fill the disk, so a backstop might be useful anyway,
> although that seems a lot less likely in practice and if your standby can't
> keep up you're in trouble anyway.

I do think it's not a bad idea to have postgres prune unnecessary WAL
at least enough so it can get the WAL segment it wants -- basically
unsticking the recovery command so progress can be made. Right now
someone (like me) has to go and trim away apparently unnecessary WAL
segments by hand.

Also, I'm not sure if the segments that are downloaded via
restore_command during the fall-behind time are "counted" towards
replay when un-sticking after a restart of postgres: in particular, I
believe that PG will want to copy the segments a second time, although
I'm not 100% sure right now.  Regardless, not being able to restart
properly or make progress after killing the offensive backend are
unhappy things.

More thoughts?

-- 
fdr

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #6200: standby bad memory allocations on SELECT

2011-09-08 Thread Daniel Farina

The following bug has been logged online:

Bug reference:  6200
Logged by:  Daniel Farina
Email address:  dan...@heroku.com
PostgreSQL version: 9.0.4
Operating system:   Ubuntu 10.04
Description:standby bad memory allocations on SELECT
Details: 

A huge thanks to Conrad Irwin of Rapportive for furnishing virtually all the
details of this bug report.

The following error occurs up to a couple of times a day on a busy
PostgreSQL database:

 ERROR: invalid memory alloc request size 18446744073709551613

The occurrence rate is on the order of one per tens of millions of
queries.

The number is always the same (2**64 - 3), and there's no obvious
pattern in the distribution of errors (they don't even appear to be
correlated with system load). The error has not been recorded on the
primary database, even though the same workload is submitted.
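A quick check shows the constant is exactly what a small negative length looks like when reinterpreted as an unsigned 64-bit size:

```python
# -3 read back as an unsigned 64-bit integer yields the reported size
assert 2**64 - 3 == 18446744073709551613
assert (-3) % 2**64 == 18446744073709551613
```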

These errors do not reproduce, seeming to evaporate almost immediately on
the standby, so durable/long lasting index corruption is not likely.  This
problem has persisted among multiple generations of hot standbys on
different hardware and sourced from different base backups.

At least once, a hot standby was promoted to a primary and the errors seem
to discontinue, but then reappear on a newly-provisioned standby.

The VERSION() string is:
   PostgreSQL 9.0.4 on x86_64-pc-linux-gnu, compiled by GCC
   gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit

The problem is confined to a particular access patterns and schema objects,
enumerated below:

The points2 table looks like:

                                    Table "public.points2"
      Column      |            Type             |                      Modifiers
------------------+-----------------------------+------------------------------------------------------
 id               | integer                     | not null default nextval('points2_id_seq'::regclass)
 identifier       | text                        | not null
 scope_id         | integer                     | not null
 class_number     | smallint                    | not null
 authorization_id | integer                     | not null
 sum_json         | text                        | not null
 amended_at       | timestamp without time zone | not null
Indexes:
    "points2_pkey" PRIMARY KEY, btree (id)
    "points2_special" btree (identifier_hash(identifier), scope_id, class_number, authorization_id)

CREATE FUNCTION identifier_hash(text) RETURNS bigint IMMUTABLE
LANGUAGE SQL AS $$
 SELECT ('x' || md5($1))::bit(64)::bigint;
$$;
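For anyone reproducing the report outside the database, the index expression can be mirrored in Python (a sketch: it assumes UTF-8 text and the common reading of the 'x' || md5 idiom, i.e. the first 64 bits of the digest reinterpreted as a signed bigint):

```python
import hashlib

def identifier_hash(s):
    """Python mirror of the SQL identifier_hash(text): take the first
    64 bits of md5(s) and reinterpret them as a signed 64-bit integer,
    matching PostgreSQL's ('x' || md5($1))::bit(64)::bigint."""
    v = int(hashlib.md5(s.encode("utf-8")).hexdigest()[:16], 16)
    # wrap to a signed 64-bit value, as bigint does
    return v - (1 << 64) if v >= (1 << 63) else v
```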

This has only been seen on queries of the form:

SELECT * FROM "points2" WHERE
(identifier_hash(identifier) = identifier_hash('1104131405')
AND identifier = '1104131405'
AND scope_id = 0
AND authorization_id = 0
AND class_number = 25)

Though this table is accessed similarly frequently by queries of the form:

SELECT points2.* FROM points2
   JOIN (VALUES (8585261297509044776, 0, 47,
'ae9064e6f24127c6a1f483cd71e14e64'))
   AS query(hashed_identifier, scope_id, class_number, identifier)
   ON identifier_hash(points2.identifier) = query.hashed_identifier
   AND points2.scope_id = query.scope_id
   AND points2.class_number = query.class_number
   AND points2.identifier = query.identifier;

these do not trigger the problem.

The table is always updated to or inserted into one row at a time
(using the "id" primary key for updates), though we sometimes update
multiple rows in a single transaction, synchronous_commit is turned off for
connections that touch the points2 table on the primary.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #5627: Can not install on POSReady 2009

2011-11-09 Thread Daniel Gochin
Hi Johnny,

What did you do to fix the problem?

Regards,
Daniel.

