[BUGS] BUG #2511: violation of primary key on update with 2 tables

2006-07-03 Thread james

The following bug has been logged online:

Bug reference:  2511
Logged by:  james
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.1
Operating system:   windows xp professional sp2
Description:violation of primary key on update with 2 tables
Details: 

Hi,

I tried to update the cust_survey_answer table. This table links to the
customer table on cust_survey_answer.memberno = customer.memberno.

The primary key of cust_survey_answer is (memberno, question_no); the table
basically holds one row per unique combination of memberno and question_no
(an answer sheet for customers).

Right now, it is confirmed there are no duplicate (memberno, question_no)
combinations. Even in the customer table, all memberno values are unique;
there is no duplicate memberno inside the customer table.

What I did was join the two tables and prepend the value of
customer.dealercode onto cust_survey_answer.memberno, storing the combined
value back into cust_survey_answer.memberno.

But when I ran the query, the result was a primary key violation on
cust_survey_answer.


My temporary solution was to drop the primary key of cust_survey_answer and
run the script below:


update cust_survey_answer
set memberno = '0' + cast(customer.dealercode as varchar(5)) + '-' + cust_survey_answer.memberno
from customer
where cust_survey_answer.memberno = customer.memberno
  and customer.dealercode is not null
  and length( trim( customer.dealercode ) ) > 0
  and cust_survey_answer.memberno not like '%-%'
  and cust_survey_answer.memberno is not null
  and cust_survey_answer.memberno <> ''

After running it, I set back the primary key for table cust_survey_answer
(the combination of memberno & question_no), and IT WORKS!


I was surprised. I think this is a bug in PostgreSQL; please help.

Thank you.

best regards,

James

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[BUGS] BUG #2569: statement_timeout bug on Windows

2006-08-08 Thread James

The following bug has been logged online:

Bug reference:  2569
Logged by:  James
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   Windows 2000 Pro SP4
Description:statement_timeout bug on Windows
Details: 

I'm using the latest version of postgresql (8.1.4) for Windows and I have a
problem with 'statement_timeout'.

Normally statement_timeout should "Abort any statement that takes over the
specified number of milliseconds". However, on my PC the unit appears to be
tenths of seconds instead of milliseconds. For example, statement_timeout=30
actually means 'wait 3 seconds and abort' instead of 'wait 30 milliseconds'.

I've tested this on the same version of postgresql on Linux and it works
correctly, as stated on the docs.

How do I get this strange result? I do the following:

set statement_timeout=30
show statement_timeout
VACUUM ANALYSE

The last statement is aborted after 3 seconds.


set statement_timeout=6
show statement_timeout
VACUUM ANALYSE

The last statement is aborted after 600 milliseconds.



Is this a bug (as I think), or could it be a misconfiguration of my OS or of
PostgreSQL?
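For completeness, the repro as a single psql snippet; this is a sketch
against the reporter's 8.1 setup. Note that from PostgreSQL 8.2 onward the
setting also accepts an explicit unit string, which avoids this kind of
ambiguity altogether:

```sql
SET statement_timeout = 30;   -- intended meaning: 30 ms
SHOW statement_timeout;
VACUUM ANALYSE;               -- on the affected build, cancelled after ~3 s

-- On 8.2 and later the unit can be spelled out, removing any doubt:
-- SET statement_timeout = '30ms';
```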



[BUGS] BUG #4025: wsock32.dll not found

2008-03-11 Thread James

The following bug has been logged online:

Bug reference:  4025
Logged by:  James
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.0.1
Operating system:   winXP
Description:wsock32.dll not found
Details: 

When the installation reaches the initdb stage, it fails with: "can not init
the database, 'WSOCK32.DLL' not found, reinstalling the application might fix
the problem."

I checked that wsock32.dll is in the windows\system32 path, and running the
command 'netsh winsock reset' did not help. So wsock32.dll is clearly
available, and resetting Winsock does not fix the issue.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] BUG #2417: bug for finding string in column

2006-05-02 Thread James

The following bug has been logged online:

Bug reference:  2417
Logged by:  James
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Microsoft Windows XP Professional
Description:bug for finding string in column
Details: 

Hi,

I have encountered a bug where, when a field is empty / NULL (type
varchar(255)), the SQL comparison <> 'myvalue' seems to be false all the
time.

Example:
select * from user_profile where acc_lock <> 'Y'

where the acc_lock field is empty for all rows.

Second, I hope PostgreSQL can be flexible enough to accept the empty string
'' as a NULL search value as well.
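For context, this is how SQL three-valued logic behaves: NULL <> 'Y'
evaluates to NULL rather than true, so NULL rows match neither <> nor =. A
sketch of the usual workarounds, reusing the table and column names from the
report:

```sql
-- Treat NULL as "not 'Y'" explicitly:
SELECT * FROM user_profile WHERE acc_lock <> 'Y' OR acc_lock IS NULL;

-- Or use the null-safe comparison:
SELECT * FROM user_profile WHERE acc_lock IS DISTINCT FROM 'Y';
```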



[BUGS] BUG #2418: number & date & time

2006-05-02 Thread James

The following bug has been logged online:

Bug reference:  2418
Logged by:  James
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Microsoft Windows XP Professional
Description:number & date & time
Details: 

Hi,

We would like a convenient data conversion that treats the value '' as NULL
for columns of a number datatype, and likewise '' as NULL for date /
timestamp / timestamptz columns.

So far, PostgreSQL rejects those values for number and date/time fields when
given an empty string '' (as an invalid date value or invalid number value).

It would be nice to have it converted automatically :)
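In the meantime, the conversion can be spelled out per column with NULLIF,
which maps an empty string to NULL before the cast. A sketch with
hypothetical names (t, n, d are not from the report; :val is a psql-style
placeholder for the application-supplied string):

```sql
INSERT INTO t (n, d)
VALUES (NULLIF(:val, '')::numeric,   -- '' becomes NULL, so the cast succeeds
        NULLIF(:val, '')::date);
```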



[BUGS] BUG #6413: pg_relation_size wont work on table with upper case chars

2012-01-28 Thread james
The following bug has been logged on the website:

Bug reference:  6413
Logged by:  James Stevenson
Email address:  ja...@stev.org
PostgreSQL version: 8.4.9
Operating system:   Linux - Debian Squeeze
Description:


I suspect pg_relation_size applies a tolower to its argument, because a
table named Attempts will not work with pg_relation_size.

Both

pg_relation_size('Attempts')
and
pg_relation_size('attempts')

fail with "no such table".

After renaming the table from Attempts to attempts, it works.
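This matches PostgreSQL's identifier case-folding: an unquoted name is
folded to lower case, so inside the pg_relation_size argument a mixed-case
name must be double-quoted. A sketch, assuming the Attempts table from the
report:

```sql
SELECT pg_relation_size('"Attempts"');
```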





[BUGS] BUG #7515: DROP TABLE IF EXISTS fails if schema does not exist

2012-09-03 Thread james
The following bug has been logged on the website:

Bug reference:  7515
Logged by:  James Bellinger
Email address:  ja...@illusorystudios.com
PostgreSQL version: 9.1.5
Operating system:   Ubuntu Linux 12.04 Server
Description:

If the table being referenced has a schema-qualified name and the schema does
not exist, DROP TABLE IF EXISTS raises an ERROR instead of a NOTICE.

So for instance,
DROP TABLE IF EXISTS bar;
This is a NOTICE if bar does not exist.

DROP TABLE IF EXISTS foo.bar;
This is an ERROR if foo does not exist, even though that implies bar does
not exist, which means it should be a NOTICE.

I noticed this because it made a drop/recreate transaction fail on me after
I changed some code to use a schema.
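On 9.1, one way to keep a drop/recreate transaction alive is to guard the
drop in a DO block that swallows the missing-schema error. A sketch using
the names from the example:

```sql
DO $$
BEGIN
    DROP TABLE IF EXISTS foo.bar;
EXCEPTION
    WHEN invalid_schema_name THEN
        NULL;  -- schema foo is absent, so foo.bar cannot exist either
END
$$;
```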





[BUGS] full-text search doesn't fall back on sequential scan when it could

2008-07-15 Thread James Dietrich
Hi,

I am running PostgreSQL 8.3.3 on Linux 2.6.18
[EMAIL PROTECTED]:~/z$ uname -a
Linux io.mills.lan 2.6.18-6-vserver-k7 #1 SMP Sat Jun 7 01:10:29 UTC
2008 i686 GNU/Linux

Short version of the problem:
When I run a full-text search that requires a full scan of the table,
an error is returned (GIN indexes don't support sequential scan)
instead of falling back on a sequential scan which would return
the results of the query.

Long version:
I have a database with two tables named 'one' and 'two':

z1=> \d one
        Table "public.one"
 Column | Type | Modifiers
--------+------+-----------
 a      | text |
Indexes:
    "fts_a" gin (to_tsvector('simple'::regconfig, a))

z1=> \d two
        Table "public.two"
 Column | Type | Modifiers
--------+------+-----------
 b      | text |
Indexes:
    "fts_b" gin (to_tsvector('simple'::regconfig, b))

Table 'one' has 51 rows:
z1=> select * from one;
      a
------------
 Two
 Three
 Four
 Five
 ...
 Forty nine
 Fifty
 Fifty one
 Fifty two
(51 rows)

Table 'two' has 5001 rows:
z1=> select * from two;
             b
---------------------------
 Fifty three
 Fifty four
 Fifty five
 Fifty six
 ...
 Five thousand fifty
 Five thousand fifty one
 Five thousand fifty two
 Five thousand fifty three
(5001 rows)

(At the bottom of this email I have copied the commands
I used to create the database.)

Now I run a full-text query on table 'one':
z1=> select count(*) from one where to_tsvector('simple', a) @@
to_tsquery('simple', '!Four');
 count
---
47
(1 row)

Running the same query under 'explain analyze' shows that the index is not being
used, but a sequential scan is being done:
z1=> explain analyze select a from one where to_tsvector('simple', a)
@@ to_tsquery('simple', '!Four');
  QUERY PLAN
---
 Seq Scan on one  (cost=0.00..1.77 rows=1 width=32) (actual
time=0.019..0.309 rows=47 loops=1)
   Filter: (to_tsvector('simple'::regconfig, a) @@ '!''four'''::tsquery)
 Total runtime: 0.341 ms
(3 rows)

That all works fine.

But all is not fine when I do the same thing on the longer table 'two':
z1=> \set VERBOSITY verbose
z1=> select count(*) from two where to_tsvector('simple', b) @@
to_tsquery('simple', '!Four');
ERROR:  0A000: query requires full scan, which is not supported by GIN indexes
LOCATION:  gin_extract_tsquery, tsginidx.c:74

I understand that this query does require a full scan, and I understand that
GIN indexes don't support a full scan, but why couldn't the planner fall back
to a sequential scan in this case? Of course it's slower, but I would prefer a
slower answer to a failure with an error and no answer at all. I can simulate
this solution by doing the following, which forces a sequential scan.
z1=> set enable_bitmapscan to off;
SET
z1=> set enable_indexscan to off;
SET
z1=> select count(*) from two where to_tsvector('simple', b) @@
to_tsquery('simple', '!Four');
 count
---
  3277
(1 row)

z1=> explain analyze select count(*) from two where
to_tsvector('simple', b) @@ to_tsquery('simple', '!Four');
   QUERY PLAN
-
 Aggregate  (cost=114.03..114.04 rows=1 width=0) (actual
time=91.171..91.171 rows=1 loops=1)
   ->  Seq Scan on two  (cost=0.00..114.02 rows=5 width=0) (actual
time=0.028..89.598 rows=3277 loops=1)
 Filter: (to_tsvector('simple'::regconfig, b) @@ '!''four'''::tsquery)
 Total runtime: 91.208 ms
(4 rows)

Any thoughts? Is this something that could be fixed in a future version of
PostgreSQL? Or is the current behavior intentionally the way it is for
some reason I haven't thought of yet?

Thanks a lot for your help,
James Dietrich

P.S. Here is the procedure I used to create the test database.
I can also provide the output of pg_dump upon request.

[EMAIL PROTECTED]:~/z$ psql template1 -U stariadmin -W
Password for user stariadmin:
Welcome to psql 8.3.3, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

template1=> create database z1;
CREATE DATABASE
template1=> \q
[EMAIL PROTECTED]:~/z$ psql z1 -U stariadmin -W -f a_commands
Password for user stariadmin:
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE SEQUENCE
CREATE LANGUAGE

[BUGS] BUG #5011: Standby recovery unable to follow timeline change

2009-08-25 Thread James Bardin

The following bug has been logged online:

Bug reference:  5011
Logged by:  James Bardin
Email address:  jbar...@bu.edu
PostgreSQL version: 8.4.0-1
Operating system:   Centos 5.3
Description:Standby recovery unable to follow timeline change
Details: 

This is another use case that fails with what looks like the same issue as
BUG #4796. 
http://archives.postgresql.org/pgsql-bugs/2009-05/msg00060.php 
(Sorry if this bug is redundant, I couldn't find any way to contribute to
that thread directly)

I'm working on a system where the master and standby servers are expected to
be able to swap roles repeatedly. The first failover works fine, but the
ex-master, now standby, can't recover using the shipped logs.

Using recovery_target_timeline='latest' finds the new history file, and
pg_standby looks good until recovery is attempted. Then we log errors like:

LOG:  unexpected timeline ID 0 in log file 0, segment 1, offset 0
LOG:  invalid primary checkpoint record

and any changes made after the first failover are lost.

Is this currently possible, or do I have to send a full file-level backup to
sync the ex-master server with the new master?



Re: [BUGS] BUG #5011: Standby recovery unable to follow timeline change

2009-08-26 Thread james bardin
On Wed, Aug 26, 2009 at 4:40 AM, Heikki Linnakangas wrote:
>> Is this currently possible, or do I have to send a full file-level backup to
>> sync the ex-master server with the new master?
>
> That should work. (Note that you do need to restore the ex-master from
> the old base backup; you can't just copy recovery.conf to the old
> master's data directory.)

I'm relying on an rsync of the data directories after the recovery to
bring up the ex-master.
This works fine, but I'd rather be able to simply rely on WAL
shipping to keep them synced, without the extra backup procedures.
Thanks for looking into this.

Is this the type of fix that would make it into the next 8.4.x release?

-jim



[BUGS] BUG #5275: validate_exec in port/exec.c only reads u/g/o, not ACLs

2010-01-13 Thread James Bellinger

The following bug has been logged online:

Bug reference:  5275
Logged by:  James Bellinger
Email address:  j...@zer7.com
PostgreSQL version: 8.4.2
Operating system:   Ubuntu 9.10
Description:validate_exec in port/exec.c only reads u/g/o, not ACLs
Details: 

Howdy,

I'm not certain of the actual *purpose* of this function doing the check in
the first place; I can see no need for it. But the result is that, if
Postgres gets its access via an ACL, it will say 'invalid binary' here and
there, will not be able to find its own executables, etc.

That said, the reason it currently gives these errors is that it checks only
user/group/other permissions; Linux ACLs are not checked. If this function
really needs to exist as is, this ought to be fixed.

Thanks

James



[BUGS] BUG #5276: pg_ctl reads data directory on -D instead of postgresql.conf directory

2010-01-14 Thread James Bellinger

The following bug has been logged online:

Bug reference:  5276
Logged by:  James Bellinger
Email address:  j...@zer7.com
PostgreSQL version: 8.4.2
Operating system:   Ubuntu 9.10
Description:pg_ctl reads data directory on -D instead of
postgresql.conf directory
Details: 

On startup, pg_ctl (correctly) looks in the directory specified by -D for
postgresql.conf and then uses that file to determine the data directory.
When stopping, however, it expects the two directories to be the same and
looks for the PID file in the given directory. As a result, on start one
needs to pass the .conf directory with -D, and on stop the data directory.

It's not a big deal, but for consistency's sake it ought to read the .conf
file to get the data directory on stop as well.



[BUGS] Bogus reporting of non-null value in function call.

2000-04-24 Thread James Finch

Platform: Linux-2.2.12-20 (RH 6.1)
PostgreSQL: 7.0RC1

Description:
Arguments to a function seem to be incorrectly validated against constraints
on the table on which it operates. For example: I have a table that defines
one column (id) as a primary key, and also specifies the NOT NULL
constraint. I then have a function performs an insert into this table. The
function parameters basically map to the columns in this table. If I pass in
null in any argument to the function, I always get the spurious error
message "ERROR:  ExecAppend: Fail to add null value in not null attribute
...", even though I'm not passing in null for the parameter that maps to the
column to which the error message applies.


To reproduce.

CREATE TABLE atom (
   id           VARCHAR(256) NOT NULL,
   name         VARCHAR(256),
   domain       VARCHAR(256),
   definition   VARCHAR(256),
   value        VARCHAR(4000),
   num_value    FLOAT8,
   date_value   DATE,
   objtype      INT2,
   PRIMARY KEY (id)
);



DROP FUNCTION createAtom( INT2, VARCHAR, VARCHAR, VARCHAR, VARCHAR,
VARCHAR );
CREATE FUNCTION createAtom( INT2, VARCHAR, VARCHAR, VARCHAR, VARCHAR,
VARCHAR )
RETURNS BOOL AS '
DECLARE
p_objtype ALIAS FOR $1;
p_idThis ALIAS FOR $2;
p_name ALIAS FOR $3;
p_domain ALIAS FOR $4;
p_definition ALIAS FOR $5;
p_value ALIAS FOR $6;
BEGIN
INSERT INTO atom ( objtype, id, name, domain, definition, 
num_value,
value )
VALUES ( p_objtype, p_idThis, p_name, p_domain, p_definition, 
p_value,
p_value ) ;
RETURN TRUE;
END;
' LANGUAGE 'plpgsql';

select createAtom( 'abc', 'Fred', 'NT', 'Fred', 'a', 1.0, null, 2 );




[BUGS] RE: Bogus reporting of non-null value in function call.

2000-04-24 Thread James Finch

I made a mistake in the bug report below. The last line should read something
like:

select createAtom( 1, 'abc', 'Fred', 'NT', 'a', null );


-Original Message-
From: James Finch [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 24, 2000 4:41 PM
To: '[EMAIL PROTECTED]'
Subject: Bogus reporting of non-null value in function call.






[BUGS] pgsql 7.1: int4out() brokeness?

2001-01-10 Thread James Troup


POSTGRESQL BUG REPORT TEMPLATE



Your name   : James Troup
Your email address  : [EMAIL PROTECTED]


System Configuration
-
  Architecture (example: Intel Pentium) : Intel Pentium

  Operating System (example: Linux 2.0.26 ELF)  : Linux 2.4.0 ELF

  PostgreSQL version (example: PostgreSQL-7.0)  : PostgreSQL-7.1 snapshot (from 2001-01-07 or -08)

  Compiler used (example:  gcc 2.8.0)   : gcc 2.95.2


Please enter a FULL description of your problem:


int4out() seems to be broken which in turn breaks ODBC.

With pgsql 7.1:

 | template1=# create table x (y int4); insert into x values (31); select y, int4out(y) 
 |from x;
 | CREATE
 | INSERT 34029 1
 |  y  |  int4out
 | +---
 |  31 | 136420312
 | (1 row)

With pgsql 7.0.3:

 | template1=> create table x (y int4); insert into x values (31); select y, int4out(y) 
 |from x;
 | CREATE
 | INSERT 35923 1
 |  y  | int4out
 | +-
 |  31 | 31
 | (1 row)

Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
--

As above:

   CREATE TABLE x (y int4); INSERT INTO x VALUES (31); SELECT y, int4out(y) FROM x;

Both rows returned should (unless I'm missing something horribly
obvious) be 31.

If you know how this problem might be fixed, list the solution below:
-----

-- 
James



[BUGS] Potential bug

2002-04-19 Thread James Vinett

To whom it may concern:

I tried to create a new table with the following statement:

CREATE TABLE
"bit" (
provider_name varchar
);



I got the error message:

ERROR:  TypeCreate: type bit already defined



According to your documentation, the 'PostgreSQL 7.1 User's Guide', on page 2
it states:

A delimited identifier (or quoted identifier) is always an identifier,
never a key word. So "select" could be used to refer to a column or
table named select, whereas an unquoted select would be taken as a key
word and would therefore provoke a parser error when used where a table
or column name is expected.

It then goes further to state in 'PostgreSQL 7.1 Reference Manual' on
page 54 the syntax for CREATE TABLE:


CREATE [ TEMPORARY | TEMP ] TABLE table (
column type
[ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]
[ column_constraint_clause | PRIMARY KEY ] [ ... ]
[, ... ]
[, PRIMARY KEY ( column [, ...] ) ]
[, CHECK ( condition ) ]
[, table_constraint_clause ]
) [ INHERITS ( inherited_table [, ...] ) ]


Is the exception that is being thrown correct? If so, you should state that
type names are excluded from the quoted-identifier rule. Furthermore, the
exception makes it sound like I'm trying to CREATE TYPE, which I am not.

james vinett
  









[BUGS] BUG #2895: Private Use Unicode character crashes server when using ILIKE

2007-01-16 Thread James Russell

The following bug has been logged online:

Bug reference:  2895
Logged by:  James Russell
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   Linux (Fedora Core 5)
Description:Private Use Unicode character crashes server when using
ILIKE
Details: 

A UTF-8 text field contains (among others) the following Unicode character:
 0xf4808286 in UTF-8 (== 0x00100086 in UTF-32)

This corresponds to a character in the Unicode "Private Use" area, where the
codepoints are undefined and designated for proprietary uses
(0x100000-0x10FFFD).

If a text field contains such a character, and if the search term contains
certain Unicode characters, then queries using ILIKE will crash the server.

We discovered this bug when a user searched for rows containing the "TM"
character (UTF-8 0xE284A2 / UTF-32 0x2122), which translated to the
following query:

SELECT * FROM foo WHERE bar ILIKE('%™%');

If the rows searched contain the row with the Private Use character, then
the server crashes during the query and psql returns the following:

"psql:recreatebug.sql:8: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
psql:recreatebug.sql:8: connection to server was lost"

This bug is 100% reproducible on multiple servers with different
configurations. Our server configuration does not have any changes from
default.

The following PHP script will generate a psql script that can reproduce the
problem. Run this script to generate the SQL script, then run the SQL script
in psql. The server will crash when the SELECT statement is run.



[BUGS] BUG #2929: Error opening 5432 port

2007-01-24 Thread James Becerra

The following bug has been logged online:

Bug reference:  2929
Logged by:  James Becerra
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Windows 2003 Server
Description:Error opening 5432 port
Details: 

Hi,

Let me explain my problem.

I usually work with Postgres, but I don't know how it happened; this error
does not let my PHP applications run:

could not connect to server: Connection refused (0x274D/10061) Is the
server running on host "127.0.0.1" and accepting TCP/IP connections on port
5432?

I made a copy of the Postgres folder and tried to run it on another server,
and it works fine. I need this Postgres running because I have two important
applications on this server.

Please, I need help to solve this.

I hope for your answer soon.


Cordially


James Becerra.
Cali - Colombia



Re: [BUGS] BUG #2929: Error opening 5432 port

2007-01-26 Thread James Becerra
Good Morning,

I suppose that was the problem, but to solve it I have disabled the Windows
firewall. I am worried because I have a server running Windows 2003 Server
with SQL Server 2005, IIS, PHP, and Postgres 8.1. I tried to connect locally
but the problem is the same. I installed the same version of Postgres on
another computer and it works fine; I made a copy of the folder on my server
and replaced the new Postgres installation on the other computer with it,
and the new computer keeps working, but I couldn't see the databases. I know
the Postgres data files folder, but it is invisible to the new installation.

How can I solve this problem with the port? I stopped the Postgres service
on my server and started it again, I have restarted the server, I've edited
the Postgres configuration files manually, and disabled the firewall, and
the problem persists.

What happened to my server? A while ago Postgres 8.1 worked perfectly, and
2 days ago it stopped.
I used the command "netstat -a" to see the ports and I cannot find port
5432.







James Humberto Becerra Ramirez 
Data Manager
ICIDR - Data Management Core 
[EMAIL PROTECTED] / [EMAIL PROTECTED] 
CIDEIM - Centro Internacional De Entrenamiento e Investigaciones Médicas 
www.cideim.org.co 




-Original Message-
From: Stefan Kaltenbrunner [mailto:[EMAIL PROTECTED] 
Sent: Miércoles, 24 de Enero de 2007 03:59 p.m.
To: James Becerra
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #2929: Error opening 5432 port

James Becerra wrote:
> The following bug has been logged online:
> 
> Bug reference:  2929
> Logged by:      James Becerra
> Email address:  [EMAIL PROTECTED]
> PostgreSQL version: 8.1
> Operating system:   Windows 2003 Server
> Description:Error opening 5432 port
> Details: 
> 
> Hi,
> 
> Let me explaint my problem.
> 
> I usually work with postgres, but i dont know how it happend, this error
> doesnt permit to run my php aplications.
> 
> could not connect to server: Connection refused (0x274D/10061) Is the
> server running on host "127.0.0.1" and accepting TCP/IP connections on
port
> 5432? 

Well, this simply means that you are trying to connect to localhost on
port 5432 and there is either nothing listening there or you have a
firewall blocking the request.
Have you checked that the postgresql service (from that error message I
assume you are running Windows) is actually running and configured to
listen on that interface? (It should be by default, but somebody might
have changed that.)
Another possibility would be some kind of firewall that is blocking the
request - if you have something like that, you could try to disable it
temporarily.


Stefan










[BUGS] ODBC driver crashes only when SQL_AUTOCOMMIT_OFF mode

2007-06-28 Thread Michael James

Hello,

I am using PostgreSQL 8.2.4 with the 8.02.03.00 ODBC driver (unicode
build), windows environment.

The program uses SQLBulkOperations to obtain from postgres new unique
identifiers for a table before I insert the data that goes in that
table. The program works and runs without error when the connection is
in the default (auto-commit) mode; setting SQL_AUTOCOMMIT_OFF causes
the program to crash dereferencing a NULL pointer internal to the ODBC
driver.

Following this message are excerpts of code sufficient to replicate the
problem, and some diagnostic logs.

Thanks,
Michael James

CREATE TABLE CONCEPTS
( cid SERIAL,
 status INT,
 created DATE,
 modified DATE );

#define UNICODE
#define _UNICODE

/* The original header names were stripped in archiving; these are the
   likely includes for this Win32 ODBC example. */
#include <windows.h>
#include <tchar.h>

#include <stdio.h>

#include <stdlib.h>
#include <string.h>
#include <sql.h>
#include <sqlext.h>

struct sqlsession {
SQLHENV envhandle;
SQLHDBC dbhandle;
};

static int initsqlsession(struct sqlsession *sql) {
memset(sql, 0, sizeof *sql);
SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &sql->envhandle);
SQLSetEnvAttr(sql->envhandle, SQL_ATTR_ODBC_VERSION,
(SQLPOINTER)SQL_OV_ODBC3, 0);
SQLAllocHandle(SQL_HANDLE_DBC, sql->envhandle, &sql->dbhandle);
return 0;
}

static int odbc_open(char *config, struct sqlsession *sql) {
int x;

initsqlsession(sql);

x = SQLDriverConnectA( sql->dbhandle, 0, config, strlen(config),
NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

switch (x) {
case SQL_SUCCESS_WITH_INFO:
case SQL_SUCCESS:
break;
default:
finisqlsession(sql);
exit(EXIT_FAILURE);
return 0;
}

#if 1  // change this to if 0 to see this code work correctly
x = SQLSetConnectAttr(sql->dbhandle, SQL_ATTR_AUTOCOMMIT,
SQL_AUTOCOMMIT_OFF, 0);
diagnose(SQL_HANDLE_DBC, sql->dbhandle);
#endif

return 1;
}

#define SET(attr, val) x=SQLSetStmtAttr(hstmt, SQL_ATTR_ ## attr,
(SQLPOINTER)(val), 0), diagnose(SQL_HANDLE_STMT, hstmt)

#define BIND(i, type, record, field) \
x = SQLBindCol(hstmt, (i), (type), &(record)->field, sizeof
(record)->field, &(record)->ind ## field), \
diagnose(SQL_HANDLE_STMT, hstmt)


static int odbc_getuniqueid(int isconcept, void *_) {
struct sqlsession *sql = _;
struct uidgen_row row;
SQLHSTMT hstmt;
SQLRETURN x;
SQLUSMALLINT rowstatus;

SQLAllocHandle(SQL_HANDLE_STMT, sql->dbhandle, &hstmt);

SET(CURSOR_TYPE,SQL_CURSOR_KEYSET_DRIVEN);
SET(USE_BOOKMARKS,  SQL_UB_VARIABLE);
SET(CONCURRENCY,SQL_CONCUR_LOCK);
SET(ROW_STATUS_PTR, &rowstatus);

SET(ROW_BIND_TYPE, sizeof (struct uidgen_row));
SET(ROW_ARRAY_SIZE, 1);

BIND(0, SQL_C_VARBOOKMARK, &row, bookmark);
BIND(1, SQL_C_ULONG,   &row, uid);
BIND(2, SQL_C_ULONG,   &row, status);

x = SQLExecDirect(hstmt, _T("SELECT cid, status FROM CONCEPTS WHERE
FALSE"), SQL_NTS);

x = SQLFetchScroll(hstmt, SQL_FETCH_FIRST, 0);

memset(&row, 0, sizeof row);

row.indbookmark = sizeof row.bookmark;
row.indstatus = sizeof row.status;
row.induid = SQL_COLUMN_IGNORE;
row.status = STATUS_NOTREADY;
x = SQLBulkOperations(hstmt, SQL_ADD);

row.induid = sizeof row.uid;
x = SQLBulkOperations(hstmt, SQL_FETCH_BY_BOOKMARK);  // ** CRASH HERE

SQLFreeHandle(SQL_HANDLE_STMT, hstmt);

return row.uid;
}

char *mydsn = "DSN=psql local;DATABASE=bugtest;SERVER=localhost;PORT=5432;UID=mixe;PWD=somepass;SSLmode=disable;ReadOnly=0;Protocol=7.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=1;CommLog=0;Optimizer=1;Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1";


int main(void) {
   struct sqlsession sql;
   memset(&sql, 0, sizeof sql);
   odbc_open(mydsn, &sql);
   odbc_getuniqueid(0, &sql);
   return 0;
}

And, here are some excerpts from the debug log that the ODBC driver
creates, first when it works (with auto commit):

[3744-1.702][[SQLBulkOperations]] Handle=06169720 4
[3744-1.702]PGAPI_BulkOperations operation = 4
[3744-1.702]PGAPI_SetConnectOption: entering fOption = 102 vParam = 0
[3744-1.712]PGAPI_SetConnectOption: AUTOCOMMIT: transact_status=1, vparam=0
[3744-1.712]POS ADD fi=06169B08 ti=06169C80
[3744-1.712]PGAPI_AllocStmt: entering...
[3744-1.712] PGAPI_AllocStmt: hdbc = 06165A80, stmt = 0616A1B8
[3744-1.712]CC_add_statement: self=06165A80, stmt=0616A1B8
[3744-1.712]0 used=-6
[3744-1.712]1 used=4
[3744-1.7

[BUGS] BUG #3429: table named MULTIPOLYGON

2007-07-05 Thread James White

The following bug has been logged online:

Bug reference:  3429
Logged by:  James White
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.3
Operating system:   debian etch amd64
Description:table named MULTIPOLYGON
Details: 

I accidentally created a table named MULTIPOLYGON with this typo:

ogr2ogr  -f "PostgreSQL" -nln sw_watersheds -nln MULTIPOLYGON 
PG:dbname=swgap USGSwatersheds.shp


Using the \dt command I get a line

 public | MULTIPOLYGON | table | postgres

However with this command 

swgap=# \d multipolygon
Did not find any relation named "multipolygon".
swgap=# \dt multipolygon
No matching relations found.

I cannot drop this table
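What happened here is PostgreSQL's identifier case folding: unquoted names are folded to lower case, so `\d multipolygon` looks for a lower-case name while the table was actually created as upper-case MULTIPOLYGON; the table can still be dropped with `DROP TABLE "MULTIPOLYGON";`. As a hedged sketch of the quoting rule, here is a small Python helper (`quote_ident` is our own illustration, mirroring but not calling the server's built-in of the same name):

```python
def quote_ident(name: str) -> str:
    """Double-quote a PostgreSQL identifier so its case is preserved.

    Unquoted identifiers are folded to lower case, which is why
    "\\d multipolygon" cannot find a table created as MULTIPOLYGON.
    Embedded double quotes are doubled, per the SQL rules.
    """
    return '"' + name.replace('"', '""') + '"'

print("DROP TABLE %s;" % quote_ident("MULTIPOLYGON"))  # DROP TABLE "MULTIPOLYGON";
```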

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[BUGS] subselect on nonexistent column succeeds!!?!

2004-07-01 Thread James Robinson
Given:
orig_sav=# \d realtycompany_contacts
 Table "public.realtycompany_contacts"
  Column  |  Type  | Modifiers
--++---
 realtycompany_id | bigint | not null
 contact_id   | bigint | not null
and
orig_sav=# \d users
Table "public.users"
  Column   |Type | Modifiers
---+-+---
 id| bigint  | not null
 name  | text|
 password  | text| not null
 version   | integer | not null
 contact   | bigint  |
 comment   | text|
 organization  | bigint  |
 pwd_storage_style | integer |
 old_name  | text|
 deleted   | boolean | not null
 deleted_date  | timestamp without time zone |
Why in the world does this statement parse and run:
orig_sav=# delete from realtycompany_contacts where contact_id in 
(select contact_id from users);
DELETE 1634

Since users has *no* column contact_id ? One would expect the statement 
to fail, and the transaction to get rolled back.

orig_sav=# select contact_id from users;
ERROR:  column "contact_id" does not exist
This is on 7.4.2.

James Robinson
Socialserve.com


Re: [BUGS] subselect on nonexistent column succeeds!!?!

2004-07-01 Thread James Robinson
Aha. Well, you learn something new every day. Yes, the behavior is 
indeed like "contact_id is not null", which was true for all rows in 
users, which explains why I lost all my data in realtycompany_contacts.

Thank goodness for backups. Many thanks!
James
On Jul 1, 2004, at 3:35 PM, Stephan Szabo wrote:
AFAIK the spec requires that subselects like that can reference outer
columns so contact_id inside the subselect refers to the outer 
contact_id
column which does exist (which makes the effective behavior of the 
above
clause the same as contact_id is not null I think)
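Stephan's point, that the unqualified `contact_id` inside the subselect legally resolves to the outer query's column, is standard SQL scoping rather than anything PostgreSQL-specific. As a sketch, the same trap can be reproduced with Python's bundled sqlite3 module (column types simplified from the report):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE realtycompany_contacts (contact_id INTEGER);
    CREATE TABLE users (id INTEGER);
    INSERT INTO realtycompany_contacts VALUES (1), (2);
    INSERT INTO users VALUES (42);
""")

# users has no contact_id column, yet this parses: per SQL scoping rules,
# the unqualified contact_id inside the subselect binds to the outer
# table's column, so the predicate behaves like "contact_id IS NOT NULL".
con.execute(
    "DELETE FROM realtycompany_contacts "
    "WHERE contact_id IN (SELECT contact_id FROM users)")

remaining = con.execute(
    "SELECT count(*) FROM realtycompany_contacts").fetchone()[0]
print(remaining)  # 0 -- every row had a non-NULL contact_id, so all were deleted
```

Writing the qualified name `users.contact_id` instead would raise the "no such column" error one might have expected.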


James Robinson
Socialserve.com


[BUGS] BUG #1715: dbmirror replaces nextval, setval functions

2005-06-10 Thread James Doherty

The following bug has been logged online:

Bug reference:  1715
Logged by:  James Doherty
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Solaris 8 Intel
Description:dbmirror replaces nextval, setval functions
Details: 

The dbmirror contrib program replaces some key functions when setting up the
Mirroring table (MirrorSetup.sql). The nextval and setval functions are
renamed as nextval_pg and setval_pg and replaced with new functions. 

This breaks any fields made with the SERIAL datatype, which have to be
ALTER'ed to use the nextval_pg and setval_pg functions to work properly.


Here's the offending code in MirrorSetup.sql:
UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';

CREATE FUNCTION pg_catalog.nextval(text) RETURNS int8  AS
'$libdir/pending.so', 'nextval' LANGUAGE 'C' STRICT;


UPDATE pg_proc set proname='setval_pg' WHERE proname='setval';

CREATE FUNCTION pg_catalog.setval("unknown",integer,boolean) RETURNS int8 
AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;
CREATE FUNCTION pg_catalog.setval("unknown",integer) RETURNS int8  AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;



[BUGS] BUG #1713: dbmirror replaces nextval, setval functions

2005-06-12 Thread James Doherty

The following bug has been logged online:

Bug reference:  1713
Logged by:  James Doherty
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Solaris 8 Intel
Description:dbmirror replaces nextval, setval functions
Details: 

The dbmirror contrib program replaces some key functions when setting up the
Mirroring table (MirrorSetup.sql). The nextval and setval functions are
renamed as nextval_pg and setval_pg and replaced with new functions. 

This breaks any fields made with the SERIAL datatype, which have to be
ALTER'ed to use the nextval_pg and setval_pg functions to work properly.


Here's the offending code in MirrorSetup.sql:
UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';

CREATE FUNCTION pg_catalog.nextval(text) RETURNS int8  AS
'$libdir/pending.so', 'nextval' LANGUAGE 'C' STRICT;


UPDATE pg_proc set proname='setval_pg' WHERE proname='setval';

CREATE FUNCTION pg_catalog.setval("unknown",integer,boolean) RETURNS int8 
AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;
CREATE FUNCTION pg_catalog.setval("unknown",integer) RETURNS int8  AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;



[BUGS] BUG #1714: dbmirror replaces nextval, setval functions

2005-06-12 Thread James Doherty

The following bug has been logged online:

Bug reference:  1714
Logged by:  James Doherty
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Solaris 8 Intel
Description:dbmirror replaces nextval, setval functions
Details: 

The dbmirror contrib program replaces some key functions when setting up the
Mirroring table (MirrorSetup.sql). The nextval and setval functions are
renamed as nextval_pg and setval_pg and replaced with new functions. 

This breaks any fields made with the SERIAL datatype, which have to be
ALTER'ed to use the nextval_pg and setval_pg functions to work properly.


Here's the offending code in MirrorSetup.sql:
UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';

CREATE FUNCTION pg_catalog.nextval(text) RETURNS int8  AS
'$libdir/pending.so', 'nextval' LANGUAGE 'C' STRICT;


UPDATE pg_proc set proname='setval_pg' WHERE proname='setval';

CREATE FUNCTION pg_catalog.setval("unknown",integer,boolean) RETURNS int8 
AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;
CREATE FUNCTION pg_catalog.setval("unknown",integer) RETURNS int8  AS
'$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;



Re: [BUGS] BUG #1715: dbmirror replaces nextval, setval functions

2005-06-13 Thread James Doherty

I wouldn't say that requiring that dbmirror not break all serial
fields in the entire database is a special requirement!

The pending.so library that is compiled prior to running dbmirror would
also have to be updated, since it refers to nextval and setval as well.

As far as the actual mirroring of the database, it's working great. It's
the way that it changes the database when it is installed that I think is
a bug. And I could attempt to fix it with my meager knowledge of postgres
and c, but I don't think I'd be confident to submit an official fix for it.
 
James

  -Original Message-
  From: Achilleus Mantzios [mailto:[EMAIL PROTECTED]]
  Sent: Mon 6/13/2005 2:59 AM
  To: James Doherty
  Cc: pgsql-bugs@postgresql.org
  Subject: Re: [BUGS] BUG #1715: dbmirror replaces nextval, setval functions

  James Doherty wrote on Jun 10, 2005:
  > The following bug has been logged online:
  >
  > Bug reference:      1715
  > Logged by:          James Doherty
  > Email address:      [EMAIL PROTECTED]
  > PostgreSQL version: 8.0.3
  > Operating system:   Solaris 8 Intel
  > Description:        dbmirror replaces nextval, setval functions
  > Details:
  >
  > The dbmirror contrib program replaces some key functions when setting up the
  > Mirroring table (MirrorSetup.sql). The nextval and setval functions are
  > renamed as nextval_pg and setval_pg and replaced with new functions.
  >
  > This breaks any fields made with the SERIAL datatype, which have to be
  > ALTER'ed to use the nextval_pg and setval_pg functions to work properly.
  >
  > Here's the offending code in MirrorSetup.sql:
  > UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';
  >
  > CREATE FUNCTION pg_catalog.nextval(text) RETURNS int8 AS
  > '$libdir/pending.so', 'nextval' LANGUAGE 'C' STRICT;
  >
  > UPDATE pg_proc SET proname='setval_pg' WHERE proname='setval';
  >
  > CREATE FUNCTION pg_catalog.setval("unknown",integer,boolean) RETURNS int8 AS
  > '$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;
  > CREATE FUNCTION pg_catalog.setval("unknown",integer) RETURNS int8 AS
  > '$libdir/pending.so', 'setval' LANGUAGE 'C' STRICT;

  You are right, since dbmirror mirrors tables based on whether the
  recordchange trigger is present, whereas it seems to mirror sequences
  unconditionally. You could rename dbmirror's setval, nextval to
  dbmirror_setval, dbmirror_nextval, and leave postgresql's original
  functions as is.

  Just a note tho: Dbmirror is a great tool, but someone needs to hack a
  great deal of code to meet his/her special requirements.

  ---
  Achilleus




[BUGS] BUG #1862: ECPG Connect, host variable trailing blanks

2005-09-06 Thread James Gray

The following bug has been logged online:

Bug reference:  1862
Logged by:  James Gray
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Red Hat Enterprise Linux AS release 4 (Nahant Update 1),
Kernel 2.6.9-11.EL on an ia64
Description:ECPG Connect, host variable trailing blanks
Details: 

We are processing a COBOL/SQL program and feeding the results to ECPG as
C/SQL (since Postgres does not yet have a COBOL/SQL preprocessor).

The problem that we are having involves a connect statement with host
variables:
   EXEC SQL  CONNECT TO  :target  AS  :user

Our problem is that we are passed Cobol strings which are blank padded. 
Our string strategy works fine for Oracle, but not for Postgres CONNECTs.

For example, if we are trying to connect to:
 - database:  demo
 - user:  scott
 - password:  tiger 
the strings must be "demo", "scott" and "tiger".

With trailing blanks user "scott" will not match user "scott ",
which is what we will present if the user had defined the Cobol variable as
PIC X(10).

There does not seem to be an option for Postgres to ignore trailing blanks 
in the CONNECT host variables. 

This only applies to CONNECT host variables, since trailing blanks in a 
CHAR column are ignored in comparisons for all other interactions with
Postgres.

Since this is inconsistent behavior, and also doesn't match Oracle's
behavior,
we are requesting a fix or an option.



[BUGS] Incorrect column identifer using AS in SELECT statement on a VIEW.

2005-11-22 Thread Hayden James
If I create a normal table and a normal view that queries that table I
get incorrect results when I query the view using the AS alias in the
select statement.  For example, if I create the following objects:

CREATE TABLE Test1( col1 VARCHAR(200) );
CREATE VIEW Test2 AS SELECT col1 FROM Test1 ORDER BY col1;

then I do a SELECT col1 AS something FROM Test2;  The column identifier comes back as "col1" instead of "something".




Re: [BUGS] Incorrect column identifer using AS in SELECT statement on a VIEW.

2005-11-22 Thread Hayden James
Also forgot to mention that this only started occurring in PostgreSQL 8.1. Here is my SELECT version() output:

   
version 
---
 PostgreSQL 8.1.0 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.0.1 20050727 (Red Hat 4.0.1-5)
On 11/23/05, Hayden James <[EMAIL PROTECTED]> wrote:
If I create a normal table and a normal view that queries that table I
get incorrect results when I query the view using the AS alias in the
select statement.  For example, if I create the following objects:

CREATE TABLE Test1( col1 VARCHAR(200) );
CREATE VIEW Test2 AS SELECT col1 FROM Test1 ORDER BY col1;

then I do a SELECT col1 AS something FROM Test2;  The column identifier comes back as "col1" instead of "something".






[BUGS] BUG #2258: debug_print_plan doesn't work for cursors

2006-02-13 Thread James Gray

The following bug has been logged online:

Bug reference:  2258
Logged by:  James Gray
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.2
Operating system:   Linux/Windows
Description:debug_print_plan doesn't work for cursors
Details: 

Feature request: Allow debug_print_plan to produce log output for DECLARE
CURSOR statements (or on the first FETCH). The TODO FAQ does not list this
feature, nor does the mail archives discuss it. Looking at the Postgres
source for pg_plan_query() shows that DECLARE CURSOR is considered a utility
command, and currently does not produce log output for debug_print_plan.
However, EXPLAIN does allow a DECLARE CURSOR statement using a further
statement type check. We have a large customer/application with performance
problems related to a few of a large number of embedded cursors, and this
feature would help isolate the problem more rapidly. The cursors are all of
the type for which SPI_IS_CURSOR_PLAN would return TRUE.  We could do a
Postgres source change ourselves, but we are not sure that it is as simple
as adding: if (Debug_print_plan) elog_node_display(DEBUG1, "plan", plan,
debug_pretty_print)  after the call to planner() in the routine
PerformCursorOpen().



[BUGS] BUG #2268: initdb fails to initialize database cluster with error "Access is denied"

2006-02-16 Thread James Hughes

The following bug has been logged online:

Bug reference:  2268
Logged by:  James Hughes
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.3
Operating system:   Windows 2003 (Server)
Description:initdb fails to initialize database cluster with error
"Access is denied"
Details: 

When we try and initialize a database cluster using initdb.exe we are
getting the following output.

-- OUTPUT from dbinit.exe --
The files belonging to this database system will be owned by user "user1".
This user must also own the server process.

The database cluster will be initialized with locale English_United
Kingdom.1252.

creating directory c:/dataa ... ok
creating directory c:/dataa/global ... ok
creating directory c:/dataa/pg_xlog ... ok
creating directory c:/dataa/pg_xlog/archive_status ... ok
creating directory c:/dataa/pg_clog ... ok
creating directory c:/dataa/pg_subtrans ... ok
creating directory c:/dataa/pg_twophase ... ok
creating directory c:/dataa/pg_multixact/members ... ok
creating directory c:/dataa/pg_multixact/offsets ... ok
creating directory c:/dataa/base ... ok
creating directory c:/dataa/base/1 ... ok
creating directory c:/dataa/pg_tblspc ... ok
selecting default max_connections ... Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
10
selecting default shared_buffers ... Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
Access is denied.
50
creating configuration files ... ok
creating template1 database in c:/dataa/base/1 ... ok
initializing pg_authid ... Access is denied.
child process was terminated by signal 1
initdb: removing data directory "c:/dataa"

-- end: OUTPUT from dbinit.exe --

user1 is a user without administrative rights. I have checked all the
directories and the user has permissions to write to the data directory, and
read the other files it requires.

I also tried the Windows installer (MSI) from pgFoundry. This too is having
the same problem.

We have two machines on which this problem occurs, one of which the
initialization originally worked on. I have checked all the user rights
assignments, and even given the postgres user (user1 in the above output
from dbinit.exe) all user rights.



Re: [BUGS] BUG #6413: pg_relation_size wont work on table with upper case chars

2012-01-30 Thread James Stevenson
That seems to work.

thanks

-Original Message-
From: Heikki Linnakangas [mailto:heikki.linnakan...@enterprisedb.com] 
Sent: 28 January 2012 19:34
To: James Stevenson
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #6413: pg_relation_size wont work on table with upper 
case chars

On 28.01.2012 18:16, ja...@stev.org wrote:
> I suspect pg_relation_size has a tolower() in it, so a table named
> Attempts will not work with pg_relation_size.
>
> Both
>
> pg_relation_size('Attempts')
> and
> pg_relation_size('attempts')
>
> fail with "no such table".
>
> After renaming the table from Attempts to attempts it will work.

Try double-quoting the table name, within the single-quotes: 
pg_relation_size('"Attempts"').

This is explained in the context of nextval() function at:

http://www.postgresql.org/docs/9.1/static/functions-sequence.html

Looks like the manual page on pg_relation_size(), or "Object Identifier 
Types" where the regclass datatype is otherwise explained, don't mention 
anything about the double-quoting. Patches to improve that are welcome..

-- 
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com




[BUGS] BUG #2489: Metadata doesn't match tables

2006-06-21 Thread James A Cole

The following bug has been logged online:

Bug reference:  2489
Logged by:  James A Cole
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 7.4.13
Operating system:   Red Hat Enterprise Linux 4
Description:Metadata doesn't match tables
Details: 

PostgreSQL 7.4.13 on i386-redhat-linux-gnu, compiled by GCC
i386-redhat-linux-gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-2)

mkeB_reports=# \d arrivals
Did not find any relation named "arrivals".

mkeB_reports=# select * from arrivals where ontime>'2006-6-21';
         ontime         | flightid | airline | runway | upline | downline | actype | sta | beacon | icao24 | tail | term | gate | etatype |           cetatime
------------------------+----------+---------+--------+--------+----------+--------+-----+--------+--------+------+------+------+---------+-------------------------------
 2006-06-21 00:16:52-04 | ACT301   | ACT     | 01L    | MDW    | MKE      | PA31   |     |        |        |      |      |      |       8 | 2006-06-21 00:18:29.538826-04
 2006-06-21 01:04:08-04 | USC361   | USC     | 01L    | MDW    | MKE      | C208   |     |        |        |      |      |      |       8 | 2006-06-21 01:05:53.170079-04



There are many other databases on this server.  Most
do not have this problem.
Jim Cole



[BUGS] BUG #3296: CASCADING FKs dont DEFER

2007-05-22 Thread S. Robert James

The following bug has been logged online:

Bug reference:  3296
Logged by:  S. Robert James
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Linux
Description:CASCADING FKs dont DEFER
Details: 

If a FK is set to ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, it does
not honor the defer.

That is, if you DELETE a record and then INSERT it back, you still get a FK
deletion of the referencing record.  This seems to ignore DEFERRED.

I did not seem to get this on 8.2 / Windows



[BUGS] BUG #3532: Can't rollup array of arrays

2007-08-10 Thread James William Pye

The following bug has been logged online:

Bug reference:  3532
Logged by:  James William Pye
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2
Operating system:   FreeBSD
Description:Can't rollup array of arrays
Details: 

Dunno about the spec, but I would think this should work:

[EMAIL PROTECTED]/tinman[]=# SELECT array(select array[table_name,
column_name] FROM information_schema.columns LIMIT 10);
ERROR:  could not find array type for datatype character varying[]

[EMAIL PROTECTED]/tinman[]=# SELECT version();
version 
   


 PostgreSQL 8.2.4 on i386-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6
[FreeBSD] 20060305
(1 row)

The expectation is the production of an array like:
'{{table_foo,column_bar},{table_foo,column_bar2},...}'.

No? (yeah, it may be more of a feature request than a bug)



[BUGS] BUG #3996: Reinstalling after uninstall

2008-02-28 Thread James P. Yalem

The following bug has been logged online:

Bug reference:  3996
Logged by:  James P. Yalem
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2 or 8.3
Operating system:   Windows XP
Description:Reinstalling after uninstall
Details: 

When I try to reinstall 8.3 or 8.2 after uninstalling 8.2, I get an error
message that the user postgres already exists and the installation program
shuts down.



Re: [BUGS] BUG #2240: length() with geometric types

2006-02-06 Thread James William Pye
On Mon, Feb 06, 2006 at 02:41:39PM +, Andreas Erber wrote:
> Hi,
> 
> I discovered some strange behaviour:
> The length() function returns different results depending on the geometric
> data type used as argument.
> 
> length(lseg) produces the correct result, i.e. length(lseg('(0,0),(2,0)')) =
> 2
> 
> length(path) always produces the double result (independently from the
> length of the path), i.e.
> length(path('(0,0),(2,0)')) = 4
> length(path('(0,0),(2,0),(4,0),(6,0),(8,0)')) = 16
> 
> Is it supposed to be that way? If yes, why?

Yes.

You specified your path as a closed path. With its "loopback", it's twice as
long.

To specify it as an open path, do path('[(0,0),(2,0)]').

SELECT length(path('[(1,0),(0,0)]')) = '1';
SELECT length(path('(1,0),(0,0)')) = '2';
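The doubling can be reproduced with plain arithmetic. A sketch in Python (`path_length` is our own illustration of the rule, not a PostgreSQL function):

```python
from math import hypot

def path_length(points, closed=True):
    """Length of a PostgreSQL-style geometric path.

    A closed path, written '((x1,y1),...)', includes the implicit segment
    looping back from the last point to the first; an open path, written
    '[(x1,y1),...]', does not.  That loopback is why
    length(path('(0,0),(2,0)')) is 4 rather than 2.
    """
    pts = list(points) + ([points[0]] if closed else [])
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

print(path_length([(0, 0), (2, 0)], closed=True))   # 4.0, like length(path('(0,0),(2,0)'))
print(path_length([(0, 0), (2, 0)], closed=False))  # 2.0, like length(path('[(0,0),(2,0)]'))
```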
-- 
Regards, James William Pye



[BUGS] BUG #2389: function within function return value

2006-04-13 Thread James M Doherty

The following bug has been logged online:

Bug reference:  2389
Logged by:  James M Doherty
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   RH9
Description:function within function return value
Details: 

create or replace function get_glaccttotals(text,integer,text,text) returns
float
as '
declare
RECORD_DATE alias for $1;
BANKID  alias for $2;
BEG_GL_RANGEalias for $3;
END_GL_RANGEalias for $4;
arecrecord;
grecrecord;
brecrecord;
total_due   numeric := 0;
total   numeric := 0;
BEGIN
  total_due   := 0;
  total   := 0;
  --=
  -- now for Total Cash & Due
  --=
  for arec in select * from amggenledger ag
   where (ag.amg_gl_nbr between BEG_GL_RANGE
 and END_GL_RANGE)
  loop
for grec in select * from genledger g
 where g.amg_gl_nbr = arec.id
   and g.bank_id = BANKID
loop

select into total sum(bbs.bbs_current_balance)
  from bank_balance_sheet as bbs
 where bbs.bank_id = BANKID
   and grec.id = bbs.bbs_bank_acct_nbr
   and date_eq(bbs.record_date,date(RECORD_DATE));

 --
 -- the select got us the total for this individual
 -- account; we now need to keep track of the running total
 -- so we know what to return from all accounts
 --
  raise NOTICE ''[0]get_accttotals() - TOTAL DUE(%) total(%)'', total_due, total;
 total_due := total_due + total;
end loop; --END OF for grec in select *
  end loop;  --END OF for arec in select * from amggenledger ag
  raise NOTICE ''[1]get_accttotals() -TOTAL DUE(%)'',total_due;
  RETURN total_due;
END;
'  language 'plpgsql';

The above function is called as follows:
trec.tot_value   :=
get_glaccttotals(RECORD_DATE,BANKID,''A50'',''A500299'');

The result is always null. When called on its own via: select * from 
get_glaccttotals(RECORD_DATE,BANKID,''A50'',''A500299'');

it returns the correct value:

 get_glaccttotals
--
5234938.4
(1 row)



[BUGS] BUG #7611: \copy (and COPY?) incorrectly parses nul character for windows-1252

2012-10-18 Thread sams . james+postgres
The following bug has been logged on the website:

Bug reference:  7611
Logged by:  James
Email address:  sams.james+postg...@gmail.com
PostgreSQL version: 9.1.6
Operating system:   Ubuntu Linux 12.04
Description:

I have a file with several nul characters in it. The file itself appears to
be encoded as windows-1252, though I am not 100% certain of that. I do know
that other software (e.g. Python) can decode the data as windows-1252
without issue. Postgres's \copy, however, chokes on the nul byte:

ERROR:  unterminated CSV quoted field
CONTEXT:  COPY promo_nonactive_load_fake, line 239900

Note that the error message is misleading: the field is quoted, but Postgres
seems to jump forward in the file when it encounters the nul bytes.

Further, the line number is wrong. That is the length of the file (in
lines), not the line on which the error occurs, which is several hundred
lines before this.

Deleting the nul byte characters allowed copy to proceed normally. I
experienced similar issues with psycopg2 and copy_expert using COPY FROM
STDIN and this file.





Re: [BUGS] BUG #7853: Incorrect statistics in table with many dead rows.

2013-02-05 Thread James R Skaggs
Kevin -

Agreed, we shouldn't have so many dead rows.  Our autovacuum is set on but
has default parameters.   We are clustering today.  This, of course, means
downtime and inconvenience to the users.

Here is the troublesome query:

select
  sum(case when t1.cdlc < 0 then 0 else t1.cdlc end) as code_line_cnt,
  sum(case when t1.cmlc < 0 then 0 else t1.cmlc end) as comment_line_cnt,
  sum(case when t1.bllc < 0 then 0 else t1.bllc end) as blank_line_cnt
from
  ( select
      max(sf.current_code_line_count)    as cdlc,
      max(sf.current_comment_line_count) as cmlc,
      max(sf.current_blank_line_count)   as bllc
    from
      stream_file sf
    group by sf.file_path_id, sf.current_source_md5
  ) as t1;


And the explain plan.  Against a optimized database, the plan was better,
as well as execution

Aggregate  (cost=32073953.42..32073953.43 rows=1 width=12) (actual
time=77394.354..77394.354 rows=1 loops=1)
  ->  GroupAggregate  (cost=29271854.25..31808889.97 rows=15146482
width=53) (actual time=62490.514..77333.105 rows=168591 loops=1)
->  Sort  (cost=29271854.25..29650516.30 rows=151464819 width=53)
(actual time=62490.492..73098.009 rows=14403547 loops=1)
  Sort Key: sf.file_path_id, sf.current_source_md5
  Sort Method:  external merge  Disk: 942440kB
  ->  Seq Scan on stream_file sf  (cost=0.00..3514999.19
rows=151464819 width=53) (actual time=0.327..20620.230 rows=14403547
loops=1)
Total runtime: 77836.949 ms

Thanks and Regards,

James Skaggs
IT/FIS Longmont
SeaTel: 8 684 1048
General: +1 612 367 6224


On Tue, Feb 5, 2013 at 3:00 PM, Kevin Grittner  wrote:

> "jim...@seagate.com"  wrote:
>
> > INFO:  analyzing "public.stream_file"
> > INFO:  "stream_file": scanned 30000 of 2123642 pages, containing
> > 184517 live rows and 2115512 dead rows; 30000 rows in sample,
> > 158702435 estimated total rows
>
> 184517 live rows in 30000 randomly sampled pages out of 2123642
> total pages means that the statistics predict that a select
> count(*) will find about 13 million live rows to count.
>
> > After "analyze verbose", the table shows 158 million rows. A
> > select count(1) yields 13.8 million rows.
>
> OK, the estimate was 13 million and there were actually 13.8
> million, but it is a random sample used to generate estimates.
> That seems worse than average, but close enough to be useful.
> The 158.7 million total rows includes dead rows, which must be
> visited to determine visibility, but will not be counted because
> they are not visible to the counting transaction.  Having over 90%
> of your table filled with dead rows is a bad situation to be in,
> from a performance standpoint.  You should use aggressive
> maintenance (like VACUUM FULL or CLUSTER) to fix the existing
> extreme bloat, and then review your autovacuum settings and overall
> vacuum regimen to prevent future bloat.
>
> This does not look like a bug from the information provided so far.
>
> -Kevin
>

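The extrapolation Kevin walks through can be written out explicitly. A sketch of the arithmetic (the 30000-page sample size is inferred from the quoted ANALYZE output; the server's reported 158.7M "estimated total rows" is close to, though not exactly, this naive live-plus-dead scaling, since ANALYZE also blends in prior statistics):

```python
# ANALYZE scales the per-page counts seen in its random sample up to the
# whole table.  Figures are taken from the quoted log output above.
sampled_pages = 30000
total_pages = 2123642
live_in_sample = 184517
dead_in_sample = 2115512

scale = total_pages / sampled_pages
est_live = live_in_sample * scale                      # what count(*) should see
est_total = (live_in_sample + dead_in_sample) * scale  # live + dead rows

print(int(est_live))   # about 13 million, matching Kevin's figure
```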

Re: [BUGS] BUG #7853: Incorrect statistics in table with many dead rows.

2013-02-06 Thread James R Skaggs
Kevin -

Here is the plan after we clustered on the PK index. It was probably that
way before the bloat, but I don't know.

Aggregate  (cost=514557.38..514557.40 rows=1 width=12) (actual
time=8823.973..8823.974 rows=1 loops=1)
  ->  HashAggregate  (cost=471029.03..492793.20 rows=1243667 width=53)
(actual time=8725.324..8789.926 rows=159369 loops=1)
->  Seq Scan on stream_file sf  (cost=0.00..315570.68 rows=12436668
width=53) (actual time=0.013..1152.600 rows=12436753 loops=1)
Total runtime: 8838.395 ms




I guess the statistics should reflect the true costs.  So if the "bloated"
statistics are representative of the true cost, I'm all for them.

We are clustering the database as we speak, so I can't force a hash
aggregate to see if that is the right choice or not.

Thanks and Regards,

James Skaggs
IT/FIS Longmont
SeaTel: 8 684 1048
General: +1 612 367 6224


On Tue, Feb 5, 2013 at 4:12 PM, Kevin Grittner  wrote:

> James R Skaggs  wrote:
>
> > Agreed, we shouldn't have so many dead rows.  Our autovacuum is
> > set on but has default parameters.   We are clustering today.
> > This, of course, means downtime and inconvenience to the users.
>
> Right, which is why it's important to figure out why the bloat
> happened.  Sometimes it is unavoidable, like when you delete 90% of
> the rows in your table or a long-lived "idle in transaction"
> connection prevents autovacuum from being able to do its work
> normally.  To prevent further downtime it is important to figure
> out what happened and make appropriate changes to your monitoring
> or vacuuming.
>
> > Here is the troublesome query:
> >
> >> select
> >>sum(case when t1.cdlc < 0 then 0 else t1.cdlc end) as code_line_cnt,
> >>sum(case when t1.cmlc < 0 then 0 else t1.cmlc end) as
> comment_line_cnt,
> >>sum(case when t1.bllc < 0 then 0 else t1.bllc end) as blank_line_cnt
> >>  from
> >>( select
> >>max(sf.current_code_line_count) as cdlc,
> >>max(sf.current_comment_line_count) as cmlc,
> >>max(sf.current_blank_line_count) as bllc
> >>  from
> >>stream_file sf
> >>  group by sf.file_path_id, sf.current_source_md5
> >>) as t1;
>
> Well, I don't see that the planner has a lot of choice there
> besides whether to use a sort or a hash to do the inner
> aggregation.  Are you saying that prior to the bloat it used a hash
> aggregation, and that was faster?  And that you feel that it should
> be using that even with the bloat?  That the dead rows seem to be
> getting included in the statistics, driving to the slower plan, and
> you feel they should be omitted?
>
> Note that I'm not arguing one way or another on these points at the
> moment; I'm just trying to understand your point clearly.
>
> -Kevin
>


Re: [BUGS] BUG #7853: Incorrect statistics in table with many dead rows.

2013-02-23 Thread James R Skaggs
Okay, I have some more info.

Some background info: this one table gets so many changes that I CLUSTER it
each night. However, after I do this, the statistics still appear to be
incorrect, even after I do a "select pg_stat_reset();" followed by an
ANALYZE at each of default_statistics_target 1, 10, and 100:

select relname, n_live_tup, n_dead_tup, n_tup_ins, n_tup_del, n_tup_hot_upd
from pg_stat_all_tables;

('stream_file', 109061143L, 103913868L, 0L, 14201L, 0L)

Is it possible that there are still dead tuples after a CLUSTER?
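One way to check this directly, rather than trusting the cumulative stats counters (which pg_stat_reset() zeroes but autovacuum later repopulates), is to compare the planner's bookkeeping against a physical scan. A sketch, assuming the pgstattuple contrib module is available (the physical scan is expensive on a table this size):

```sql
-- What the planner believes; updated by VACUUM, ANALYZE, and CLUSTER:
SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'stream_file';

-- Ground truth from physically scanning the table (contrib module):
SELECT tuple_count, dead_tuple_count, dead_tuple_percent
FROM pgstattuple('stream_file');
```

CLUSTER rewrites only tuples that are still needed, so a large dead_tuple_count immediately afterward would point at long-running transactions holding back the visibility horizon rather than at CLUSTER failing to do its job.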

EXPLAIN ANALYZE thinks we have 112M rows (in fact there are about 10M), so
it needs to do a disk sort even with work_mem=1GB:

Aggregate  (cost=23622814.39..23622814.40 rows=1 width=12) (actual time=57512.462..57512.462 rows=1 loops=1)
  ->  GroupAggregate  (cost=21536989.70..23425506.64 rows=11274728 width=53) (actual time=50825.396..57457.421 rows=202394 loops=1)
        Output: max(sf.current_code_line_count), max(sf.current_comment_line_count), max(sf.current_blank_line_count)
        ->  Sort  (cost=21536989.70..21818857.90 rows=112747280 width=53) (actual time=50825.386..55004.916 rows=7683730 loops=1)
              Output: sf.current_code_line_count, sf.current_comment_line_count, sf.current_blank_line_count, sf.file_path_id, sf.current_source_md5
              Sort Key: sf.file_path_id, sf.current_source_md5
              Sort Method:  external merge  Disk: 502288kB
              ->  Seq Scan on stream_file sf  (cost=0.00..2604208.80 rows=112747280 width=53) (actual time=0.033..27922.485 rows=7683730 loops=1)
                    Output: sf.current_code_line_count, sf.current_comment_line_count, sf.current_blank_line_count, sf.file_path_id, sf.current_source_md5
Total runtime: 57693.835 ms

Now we do many, many ANALYZE VERBOSE, and converge on the correct value,
which is known to be about 10M rows.

INFO:  analyzing "public.stream_file"
INFO:  "stream_file": scanned 30000 of 1476736 pages, containing 158846
live rows and 2175512 dead rows; 30000 rows in sample, 112747282 estimated
total rows
Query returned successfully with no result in 9172 ms.

... 200X !...

INFO:  analyzing "public.stream_file"
INFO:  "stream_file": scanned 30000 of 1480611 pages, containing 158776
live rows and 2170410 dead rows; 30000 rows in sample, 9769236 estimated
total rows
Query returned successfully with no result in 441 ms.
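As a side note, the raw per-page extrapolation from those two INFO lines lands near 7.8M both times (assuming the page sample size was the default 30,000, which the archived messages show truncated). The much larger "estimated total rows" figure arises because ANALYZE blends each new sample with the previous reltuples value, which is why repeated runs converge only gradually. A quick check of the arithmetic:

```python
# Naive per-page extrapolation behind ANALYZE's row estimate, using the
# figures from the two INFO lines above. Sample size of 30000 pages is an
# assumption (the default for statistics target 100).

def extrapolate(live_rows_sampled: int, pages_sampled: int,
                total_pages: int) -> float:
    """Live rows per sampled page, scaled to the whole table."""
    return live_rows_sampled / pages_sampled * total_pages

first = extrapolate(158846, 30000, 1476736)   # first ANALYZE above
later = extrapolate(158776, 30000, 1480611)   # after ~200 more runs

# Both raw samples point at roughly 7.8M live rows, nowhere near 112M.
print(f"{first:,.0f}  {later:,.0f}")
```

So the fresh evidence was consistent all along; it is the smoothing toward the stale 112M starting point that made convergence slow.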

Now, the optimizer thinks we can do a hash aggregate in memory, and we get
better performance.

Aggregate  (cost=1734729.12..1734729.14 rows=1 width=12) (actual time=33816.049..33816.049 rows=1 loops=1)
  ->  HashAggregate  (cost=1700534.50..1717631.81 rows=976989 width=53) (actual time=33535.083..33712.787 rows=202404 loops=1)
        Output: max(sf.current_code_line_count), max(sf.current_comment_line_count), max(sf.current_blank_line_count)
        ->  Seq Scan on stream_file sf  (cost=0.00..1578410.89 rows=9769889 width=53) (actual time=392.435..26278.143 rows=7710223 loops=1)
              Output: sf.id, sf.current_blank_line_count, sf.current_code_line_count, sf.current_comment_line_count, sf.current_source_md5, sf.component_id, sf.current_file_instance_id, sf.current_file_state_id, sf.file_path_id, sf.stream_element_id
Total runtime: 33822.707 ms

But later in the day, the statistics revert back to the 100M number!  Any
ideas?  Is there some kind of cache that is remembering the old statistics?
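There is no cache of old statistics as such, but autoanalyze will re-sample the table on its own schedule and overwrite reltuples. A sketch for checking whether that is what happened later in the day (pg_stat_all_tables columns as in 8.4+):

```sql
-- When did vacuum/analyze last touch the table, and was it autovacuum?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_all_tables
WHERE relname = 'stream_file';
```

If last_autoanalyze falls between the manual ANALYZE runs and the moment the estimate reverted, the blended-estimate behavior above would explain the jump back toward the stale number.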


Thanks and Regards,

James Skaggs
IT/FIS Longmont
SeaTel: 8 684 1048
General: +1 612 367 6224


On Sun, Feb 10, 2013 at 1:10 PM, Jeff Janes  wrote:

> On Tue, Feb 5, 2013 at 2:00 PM, Kevin Grittner  wrote:
> > "jim...@seagate.com"  wrote:
> >
> >> INFO:  analyzing "public.stream_file"
> >> INFO:  "stream_file": scanned 30000 of 2123642 pages, containing
> >> 184517 live rows and 2115512 dead rows; 30000 rows in sample,
> >> 158702435 estimated total rows
> >
> > 184517 live rows in 30000 randomly sampled pages out of 2123642
> > total pages, means that the statistics predict that a select
> > count(*) will find about 13 million live rows to count.
> >
> >> After "analyze verbose", the table shows 158 million rows. A
> >> select count(1) yields 13.8 million rows.
> >
> > OK, the estimate was 13 million and there were actually 13.8
> > million, but it is a random sample used to generate estimates.
> > That seems worse than average, but close enough to be useful.
> > The 158.7 million total rows includes dead rows, which must be
> > visited to determine visibility, but will not be counted because
> > they are not visible to the counting transaction.

[BUGS] BUG #8238: duplicate of bug #6372 on panfs

2013-06-18 Thread james . e . hughes
The following bug has been logged on the website:

Bug reference:  8238
Logged by:  Jim Hughes
Email address:  james.e.hug...@boeing.com
PostgreSQL version: 9.1.4
Operating system:   SLES 11
Description:

Upon creating a database on a panfs (Panasas proprietary fs) directory I
get:


createdb: database creation failed: ERROR:  could not fsync file
"base/16387": Invalid argument


This is a duplicate of bug #6372 and is fixed by applying the patch for
copydir.c listed in the thread.  So we confirm this problem also exists on
another fs in addition to CIFS.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs