[BUGS] BUG #4290: wrong double subselect with aggregate function

2008-07-09 Thread Andreas

The following bug has been logged online:

Bug reference:  4290
Logged by:  Andreas
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3.3
Operating system:   Linux
Description:wrong double subselect with aggregate function
Details: 

select version() ;
                                   version
------------------------------------------------------------------------------
 PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (Debian 4.3.1-1) 4.3.1

create table a (
  id serial primary key
) ;

create table b (
  id serial primary key
  , aid int not null references a
) ;

create table c (
  id serial primary key
  , aid int not null references a
) ;

create table d (
  id serial primary key
  , bid int not null references b
  , cid int references b
) ;

insert into a ( id ) values ( default ) , ( default ) ;
insert into b ( aid ) values ( 1 ) , ( 2 ) ;
insert into c ( aid ) values ( 1 ) , ( 1 ) , ( 2 ) , ( 2 ) ;
insert into d ( bid ) values ( 1 ) , ( 2 ) ;
select
 ( select min( c.id ) from c where c.aid = ( select b.aid from b where b.id
= d.bid ) ) as min_c_id
  , ( select b.aid from b where b.id = d.bid ) as b_aid
  , ( select min( c.id ) from c where c.aid = 1 ) as min_c_id_1
  , ( select min( c.id ) from c where c.aid = 2 ) as min_c_id_2
from d ;

 min_c_id | b_aid | min_c_id_1 | min_c_id_2
----------+-------+------------+------------
        1 |     1 |          1 |          3
        1 |     2 |          1 |          3

In the second row I expected min_c_id to be 3, not 1.
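
For comparison, a join-based formulation of the same question (only a
cross-check sketch against the schema above, not a tested workaround)
returns the values I expected:

select d.id,
       b.aid                                             as b_aid,
       ( select min( c.id ) from c where c.aid = b.aid ) as min_c_id
  from d
  join b on b.id = d.bid ;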

Best Regards
Andreas



[BUGS] BUG #4684: lastval in function

2009-02-28 Thread andreas

The following bug has been logged online:

Bug reference:  4684
Logged by:  andreas
Email address:  postgre...@elbrief.de
PostgreSQL version: 8.3.6
Operating system:   linux
Description:lastval in function
Details: 

create table bla ( id serial primary key , name text not null unique ) ;
create table bla2 ( id serial primary key , blaid int references bla , name
text not null unique ) ;
create or replace function blaa( text ) returns int as $$
  my $name = shift ;
  my $q = spi_exec_query( "select id from bla where name = '$name'" ) ;
  if ( not $q->{ rows }->[ 0 ]->{ id } ) {
spi_exec_query( "insert into bla ( name ) values ( '$name' )" ) ;
$q = spi_exec_query( "select lastval() as id" ) ;
  }
  return $q->{ rows }->[ 0 ]->{ id } ;
$$ language plperl security definer ;
select blaa( 'test' ) ;
insert into bla2 ( blaid , name ) values ( blaa( 'muster' ) , 'muster' ) ;

select lastval() ;
 lastval
-
   2

I expected lastval() to be 1, because that is the id generated by the
INSERT statement itself.

insert into bla2 ( blaid , name ) values ( blaa( 'muster2' ) , blaa(
'muster3' ) ) ;

select lastval() ;
 lastval
-
   4

If nextval() is used inside a function that is called from an INSERT
statement, lastval() always returns the value generated inside the last
function call. I expected lastval() to return the value generated for the
INSERT statement itself. I think this should either be clarified in the
documentation or, better, fixed so that the INSERT statement's own
nextval() is called after the function calls.
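
As a workaround, the id generated by the outer INSERT can be obtained
without lastval() at all by using RETURNING (available since 8.2). This is
only a sketch against the tables above; the inserted name is just an
example value:

insert into bla2 ( blaid , name )
values ( blaa( 'muster4' ) , 'muster4' )
returning id ;  -- returns bla2.id for this INSERT, regardless of any
                -- nextval() calls made inside blaa()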

Andreas



Re: [BUGS] BUG #4684: lastval in function

2009-03-02 Thread Andreas

Heikki Linnakangas schrieb:

andreas wrote:

If nextval() is used inside a function that is called from an INSERT
statement, lastval() always returns the value generated inside the last
function call. I expected lastval() to return the value generated for the
INSERT statement itself. I think this should either be clarified in the
documentation or, better, fixed so that the INSERT statement's own
nextval() is called after the function calls.


Well, others might expect the opposite, like existing applications. Want 
to suggest wording for the documentation?


Yes, existing applications might be affected, but I think this is a very
rare situation. Consider someone who uses an INSERT statement with a
function call, where the function does NOT use nextval(), and who uses
lastval() to determine the last inserted value: he has no problem. But if
someone later changes the function so that it does use nextval(), the
application breaks. So I think it is much better to change the behaviour,
because that is what I expect, and I think it is what others expect too.
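
Until that is settled, naming the sequence explicitly avoids the ambiguity,
e.g. (a sketch that assumes the default sequence name created by the serial
column on bla2):

insert into bla2 ( blaid , name ) values ( blaa( 'muster4' ) , 'muster4' ) ;
select currval( 'bla2_id_seq' ) ;  -- the id assigned by this INSERT, not by
                                   -- the nextval() inside blaa()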

Andreas



[BUGS] BUG #3246: User "name" could not be created.

2007-04-20 Thread Andreas

The following bug has been logged online:

Bug reference:  3246
Logged by:  Andreas
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2
Operating system:   Windows Vista
Description:User "name" could not be created.
Details: 

Hi

I cannot install PostgreSQL 8.2 or 8.1 on my computer with Windows Vista.

Every time I try to install, it stops at 99%. I just get the error message:
"User "name" could not be created. Access Denied!"

What does that mean?

I have looked at the FAQ but not found any help for my problem.



[BUGS] BUG #3741: translation mistake

2007-11-10 Thread Andreas

The following bug has been logged online:

Bug reference:  3741
Logged by:  Andreas
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.4/8.3.beta2
Operating system:   debian gnu linux
Description:translation mistake
Details: 

FEHLER:  Spalte »...« muss in der GROUP-BY-Klausel erscheinen oder in
einer Aggregatfunktion verwendent werden

(This is the German translation of: ERROR:  column "..." must appear in the
GROUP BY clause or be used in an aggregate function.)

The word "verwendent" is incorrect; correct is "verwendet".
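
For reference, the message can be triggered with any query that selects a
non-grouped, non-aggregated column, for example:

select relname, relpages from pg_class group by relname;
-- ERROR:  column "pg_class.relpages" must appear in the GROUP BY clause or
-- be used in an aggregate function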



[BUGS] BUG #3926: Typo in documentation

2008-02-03 Thread Andreas

The following bug has been logged online:

Bug reference:  3926
Logged by:  Andreas
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.3
Operating system:   All
Description:Typo in documentation
Details: 

There is a typo at http://www.postgresql.org/docs/8.3/static/sql-vacuum.html
in the description of the VERBOSE parameter: max_fsm_pages is listed twice.
I think the correct text is max_fsm_pages, max_fsm_relations.

Andreas



Re: [BUGS] BUG #5705: btree_gist: Index on inet changes query result

2010-10-21 Thread andreas
On Tue, 2010-10-19 at 18:22 -0400, Tom Lane wrote:
> Robert Haas  writes:
> > On Mon, Oct 11, 2010 at 7:50 PM, Tom Lane  wrote:
> >> Well, actually the btree_gist implementation for inet is a completely
> >> broken piece of junk: it thinks that convert_network_to_scalar is 100%
> >> trustworthy and can be used as a substitute for the real comparison
> >> functions, which isn't even approximately true.
> 
> > Are you planning to fix this?
> 
> No.  I don't understand why Teodor did it like that, so I'm not going
> to try to change it.  I'd be willing to take responsibility for ripping
> out btree_gist's inet support altogether ...
> 
>   regards, tom lane

That is the reason why I just reported it instead of trying to fix it
myself first. Since I could not understand why it was done like that, I
did not feel like fixing it.

Best regards,
Andreas Karlsson





Re: [BUGS] BUG #4421: convert_to() should be immutable

2008-09-18 Thread Andreas Peer

Tom Lane wrote:

Heikki Linnakangas <[EMAIL PROTECTED]> writes:

[EMAIL PROTECTED] wrote:

The function convert_to(string text, dest_encoding name) is not allowed to
be used in an index expression, because it is not marked as "IMMUTABLE".

You can change the way a conversion is done with CREATE/DROP CONVERSION.
That's why it can't be IMMUTABLE.


The other reason is that it depends on the database encoding.  I suppose
you could make an argument that that's fixed for as long as IMMUTABLE
needs to think about --- but we'd have to remember to undo the marking
if database encoding ever becomes less fixed.

Just out of curiosity, what's the use-case for this function in an index
anyway?

regards, tom lane


Thank you all for the responses!
Well, the use case is a strange one... I would like to use a varchar()
column for storing a variable-length vector of integers. The numbers are
represented by the codepoints. Therefore, I need to sort them as binary
data, not as characters. I would often need to get all the vectors that
lie in between two vectors, therefore I need the "binary" index.
And the code should be as database-independent as possible, therefore I
cannot use an array or another data type that may not be supported by
other DBMSs.
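
A possible direction (only a sketch; the table and column names here are
made up, and it deliberately ignores the caveat above that the conversion
or the database encoding could change later) is a thin wrapper that pins
the encoding so an index can be built on it:

create function to_bytes(text) returns bytea as
$$ select convert_to($1, 'UTF8') $$
language sql immutable strict;

create index vectors_bin_idx on vectors ( to_bytes(vec) );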


Regards,
Andreas Peer



Re: [BUGS] BUG #4421: convert_to() should be immutable

2008-09-19 Thread Andreas Peer
Tom Lane schrieb:
> Andreas Peer <[EMAIL PROTECTED]> writes:
>   
>> Well, the use case is a strange one... I would like to use a varchar() 
>> column for storing a variable-length vector of integers. The numbers are 
>> represented by the codepoints. Therefore, I need to sort them as binary 
>> data, not as characters. I would often need to get all the vectors that 
>> lie in between to vectors, therefore I need the "binary" index.
>> 
>
> Use bytea maybe?
>
>   
Then I cannot access the single characters any more (or I just use bytes,
but that limits a number to 256 values...)
>> And the code should be as database independent as possible, therefore I 
>> cannot use an array or another data type that may not be supported by 
>> other DBMS.
>> 
>
> I can't imagine how you'd think that a functional index on convert_to()
> would be the most portable solution ...
>   
Nearly every DBMS has a function for converting character strings to binary
strings; I would just have to change the function name/parameters, but the
rest of the query could stay the same.
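
I.e. the query shape would stay roughly like this (a sketch; the table,
column, and boundary values are only placeholders):

select *
  from vectors
 where convert_to(vec, 'UTF8') between convert_to('abc', 'UTF8')
                                   and convert_to('abq', 'UTF8');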
>   regards, tom lane
>
>   
Regards,
Andreas Peer



[BUGS] data loss with pg_standby when doing a controlled failover

2009-04-06 Thread Andreas Pflug
Running 8.3.7, I have a warm standby configuration with an
archive_timeout of 10min.

It's obvious that there's a 10min window in which data could be lost if the
master fails and the warm standby server has to take over. What's not
obvious is that this is true even if the master server is shut down
regularly, because it will not write out a last log segment to the
archive. As a consequence, when doing a controlled failover (for
maintenance purposes or so) all data changed after the last archive copy
will be lost.
IMHO this should be mentioned in the docs explicitly (I find it quite
surprising that data can be lost even if the system is shut down
correctly), or, better, when archiving is on, shutting down the postmaster
should push out all log segments containing all changes, so that the warm
standby server can catch up.

Regards.
Andreas



Re: [BUGS] data loss with pg_standby when doing a controlled failover

2009-04-06 Thread Andreas Pflug
Scott Mead wrote:
>
>
>
> On Mon, Apr 6, 2009 at 7:37 AM, Andreas Pflug
> <pgad...@pse-consulting.de> wrote:
>
> Running 8.3.7, I have a warm standby configuration with a
> archive_timeout of 10min.
>
> It's obvious that there's a 10min period where data could be lost
> if the
> master fails and the warm standby server has to take over. What's not
> obvious is that this is true even if the master server is shut down
> regularly, because it will not write out a last log segment to the
> archive. As a consequence, when doing a controlled failover (for
> maintenance purposes or so) all data changed after the last
> archive copy
> will be lost.
> IMHO this should be mentioned in the docs explicitly (I find it quite
> surprising that data can be lost even if the system is shutdown
> correctly), or better when shutting down the postmaster should
> spit all
> log segments containing all changes when archiving is on so the warm
> standby server can catch up.
>
>
>
> You make an excellent point.  If you're looking for a way to mitigate
> this risk, run:
>
> select pg_switch_xlog() ;
>
>Before shutting down.
Sort of, unless some other user manages to commit a transaction after
pg_switch_xlog() and before the database ceases operation.

My "graceful failover" procedure now includes this workaround:
- shut down the server
- restart the server with --listen_addresses='' to prevent other users from
connecting (there are no local users on the server machine)
- pg_switch_xlog() (see the SQL sketch below)
- shut down for good
- let the warm standby server take over
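
The SQL part of that step, roughly (run as a superuser once no other client
can connect; a sketch, not a complete procedure):

select pg_switch_xlog();            -- force the current WAL segment out to the archive
select pg_current_xlog_location();  -- note the position the standby should reach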

Regards,
Andreas





Re: [BUGS] data loss with pg_standby when doing a controlled failover

2009-04-06 Thread Andreas Pflug
Guillaume Smet wrote:
> On Mon, Apr 6, 2009 at 1:37 PM, Andreas Pflug  
> wrote:
>   
>> IMHO this should be mentioned in the docs explicitly (I find it quite
>> surprising that data can be lost even if the system is shutdown
>> correctly), or better when shutting down the postmaster should spit all
>> log segments containing all changes when archiving is on so the warm
>> standby server can catch up.
>> 
>
> See also this thread which might be interesting for you:
> http://archives.postgresql.org/message-id/3f0b79eb0903242329j12865d55s348f5c873a956...@mail.gmail.com
>   
It is, though not related to this problem. I'd expect pg_standby's
default behaviour to match the pseudocode in the warm-standby
documentation. To me, it's kind of unexpected that it won't continue
restoring if the trigger file is present (as Kevin said, what's the use
case for the current behaviour?).

Regards,
Andreas



[BUGS] Bug in pg_dump.exe/pg_restore (Version 8.4 beta 2)

2009-06-09 Thread Andreas Nolte
Hello,

I have been testing for almost a day now, and I am pretty sure that this is
a bug.

I tried to collect as much information for you as possible.

The behaviour is as follows:

 

I have a UTF8 encoded database (with name MANZPartCarrier), running on 
WindowsXP. Dump/Restore work fine with PostgreSQL 8.3.6, but not with 8.4 beta 
2.

 

 

// 1. Create the database, via silent installation

msiexec /i Installer\postgresql-8.4-beta2-int.msi  /qn INTERNALLAUNCH=1 
DOSERVICE=0 NOSHORTCUTS=1 DOINITDB=0 
ADDLOCAL="server,psql,pgadmin,includefiles,libfiles,nls" PERMITREMOTE=1 
BASEDIR=%pgPath% /Lime Log\PGServerInstallationLog.txt

 

// 2. Init DB with UTF8 settings

%pgPath%\bin\initdb.exe --username="sa" --pwfile="SAPassword.txt" 
--pgdata=%pgPath%\data --encoding=UTF8 --locale="English_United States" 
--xlogdir=%walPath%

 

// 3. Backup the DB (also tried with explicit -E UTF8 or -E Windows_1252)

%backupPath%\..\bin\pg_dump.exe -h localhost -p 5432 -U sa -F c -b -v -f 
%backupPath%\%ManzPartCarrierFileName% "MANZPartCarrier"

 

 

// Some intermediate steps (from my point of view not important)

  // Stopping database, to disconnect connections and background tasks

  pg_ctl.exe stop -w -D "..\data" -m fast

 

  // Restart database

  Net.exe start pgMANZDB

 

  // Drop current database

  // dropdb -U sa MANZPartCarrier

 

// 4. Restore the db

..\bin\pg_restore.exe -h localhost -p 5432 -U sa -C -d postgres -v 
"%backupPath%\MANZPartCarrier_Original.backup"

 

--> Result:

A warning appears on the console: WARNING: errors ignored on restore: 152

 

The db log shows:

... CEST ERROR: syntax error at or near "@" at character 33

... CEST STATEMENT: ALTER FUNCTION public.sp_pp_set(@ppnr integer, @JobID 
character varying) OWNER TO sa;

 

The restore fails for a lot of functions because of their parameters; only 2
functions (without any parameters) are restored fine.
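
The parameter names themselves are legal when double-quoted, which suggests
the dump simply emits them unquoted. For illustration only (this is not the
real dump output, and the function body is a placeholder):

CREATE FUNCTION public.sp_pp_set("@ppnr" integer, "@JobID" character varying)
RETURNS integer AS $$ SELECT 1 $$ LANGUAGE sql;

ALTER FUNCTION public.sp_pp_set("@ppnr" integer, "@JobID" character varying)
  OWNER TO sa;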

 

 

 

 

 

Beste Gruesse / best regards

Andreas Nolte

 

Manz Automation AG
Andreas Nolte
Entwicklung / Applikation Software HMI & SCADA

R & D / Application Software HMI & SCADA

 

Steigaeckerstrasse 5
D-72768 Reutlingen
T +49.(0)7121.9000-844
F +49.(0)7121.9000-279

E ano...@manz-automation.com
www.manz-automation.com <http://www.manz-automation.com/> 

 










[BUGS] Postgres

2009-06-10 Thread Andreas Mutota
Hello,

I am wondering if your server is up and running. I have been trying to
register myself with the user mailing list forum, but I could not get
through.

I have a quick query concerning a Postgres installation on a Linux machine.
I have managed to install Postgres on my machine (Linux).

I could create a database and tables, but I cannot populate data into any of
the tables.

How would I make this possible?
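
A minimal check of whether this is a privileges problem (table and role
names below are only placeholders):

-- connect as the role that cannot insert, then:
INSERT INTO mytable VALUES (1);

-- if that reports "permission denied", grant the privilege as the table
-- owner or a superuser:
GRANT SELECT, INSERT ON mytable TO myuser;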

 

Best regards,

 

Andreas Mutota

GIS Analyst Programmer

National Planning Commission

Central Bureau of Statistics

Private Bag 13356, Windhoek

Republic of Namibia

TeL: +264 61 283 4074

Mobile: +264 812 886584

or

Email: amutota2...@yahoo.com

 



[BUGS] BUG #4881: KDE 4.2 not running any more when installing one click installer from Enterprise DB

2009-06-25 Thread Andreas Wenk

The following bug has been logged online:

Bug reference:  4881
Logged by:  Andreas Wenk
Email address:  a.w...@netzmeister-st-pauli.de
PostgreSQL version: 8.4 RC1
Operating system:   Linux Kubuntu 9.04 Jaunty
Description:KDE 4.2 not running any more when installing one click
installer from Enterprise DB
Details: 

Hi there,

I am running a Kubuntu 9.04 Jaunty OS. I wanted to check out the one-click
installer from http://www.enterprisedb.com/getfile.jsp?fileid=570 - the file
is postgresql-8.4.0-rc1-1-linux.bin, downloaded 2009-06-18 00:09. The
installation process is easy and works fine (no extra packages installed).
PostgreSQL and pgAdmin also run fine. But after shutting down the system,
restarting it, and giving username and password at the login prompt, KDE
4.2 tries to start but crashes completely with a message like "Could not
start kdeinit. Please check your installation". OK - I tried to restore a
saved session, but no way. Then I uninstalled PostgreSQL with the
uninstaller program, rebooted, and everything worked fine again.

I could not figure out what the problem is in more depth.

Cheers

Andy



[BUGS] 1-Click 8.4 win32 fails to run scripts

2009-07-07 Thread Andreas Pflug
The installer claimed a non-fatal problem; the cluster is up and running.
Excerpt from install-postgresql.log:

Installing pl/pgsql in the template1 databases...
psql: Warnung: überflüssiges Kommandozeilenargument »CREATE LANGUAGE
plpgsql;« ignoriert
psql: Warnung: überflüssiges Kommandozeilenargument »template1« ignoriert
psql: konnte nicht mit dem Server verbinden: Cannot assign requested
address (0x2741/10049)
Läuft der Server auf dem Host »???« und akzeptiert er
TCP/IP-Verbindungen auf Port -U?

(In English: psql warns that the superfluous command-line arguments »CREATE
LANGUAGE plpgsql;« and »template1« are ignored, then cannot connect to the
server: is the server running on host »???« and accepting TCP/IP
connections on port -U?)

The single deviation from default params was a custom port number 54384
(installation with de_DE).

Regards,
Andreas






Re: [BUGS] 1-Click 8.4 win32 fails to run scripts

2009-07-15 Thread Andreas Pflug
This issue still seems unaddressed.


Andreas Pflug wrote:
> The installer claimed a non-fatal problem; the cluster is up and running.
> Excerpt from install-postgresql.log:
>
> Installing pl/pgsql in the template1 databases...
> psql: Warnung: überflüssiges Kommandozeilenargument »CREATE LANGUAGE
> plpgsql;« ignoriert
> psql: Warnung: überflüssiges Kommandozeilenargument »template1« ignoriert
> psql: konnte nicht mit dem Server verbinden: Cannot assign requested
> address (0x2741/10049)
> Läuft der Server auf dem Host »???« und akzeptiert er
> TCP/IP-Verbindungen auf Port -U?
>
> The single deviation from default params was a custom port number 54384
> (installation with de_DE).
>
> Regards,
> Andreas
>
>
>
>
>   




[BUGS] BUG #5057: Binaries missing

2009-09-15 Thread Andreas Kuckartz

The following bug has been logged online:

Bug reference:  5057
Logged by:  Andreas Kuckartz
Email address:  a.kucka...@ping.de
PostgreSQL version: 8.2.14
Operating system:   Windows
Description:Binaries missing
Details: 

"Only PostgreSQL 8.2 and above are supported on Windows."
http://www.postgresql.org/download/windows

But there are no binaries available for 8.2.14. In fact binaries for that
version are only offered for Linux.

That is a problem because the announcement on the start page states: "Update
releases 8.4.1, 8.3.8, 8.2.14, 8.1.18, 8.0.22 and 7.4.26 are now available
for download. These releases include security-related fixes which should be
installed as soon as possible."

Normal users will assume that binaries are available and not just source
code. I wasted time trying to find them, time which I could better have used
to build the binaries.

In fact I later spent the time to build the binaries, but I still do not have
a usable installer package.



[BUGS] Re: [PATCHES] Patch for Linux-IA64

2000-07-06 Thread Andreas Schwab

Bruce Momjian <[EMAIL PROTECTED]> writes:

|> The Makefile.shlib changes will have to be discussed with other Linux
|> developers so we are sure it will work on all platforms.

The problem with the current settings is that the linker is called
directly.  This is wrong, it should always be called through the compiler
driver (the only exception is `ld -r').  This will make sure that the
necessary libraries like libgcc are linked in.

But there is still a different problem with the setting of LDFLAGS_ODBC.
The psqlodbc module defines the functions _init and _fini which are
reserved for the shared library initialisation.  These should be changed
to constructor functions.  Then LDFLAGS_ODBC can be changed to be just
`-lm'.  Btw, why does it use -Bsymbolic?

Here is a patch to implement this:

--- src/Makefile.shlib
+++ src/Makefile.shlib  2000/07/05 16:51:27
@@ -145,9 +145,9 @@
 ifeq ($(PORTNAME), linux)
   install-shlib-dep:= install-shlib
   shlib:= 
lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)
-  LDFLAGS_SL   := -Bdynamic -shared -soname $(shlib)
-  LDFLAGS_ODBC := -Bsymbolic -lc -lm
-  SHLIB_LINK   += -lc
+  LD   := $(CC)
+  LDFLAGS_SL   := -shared -Wl,-soname,$(shlib)
+  LDFLAGS_ODBC := -lm
   CFLAGS   += $(CFLAGS_SL)
 endif
 
--- src/interfaces/odbc/psqlodbc.c
+++ src/interfaces/odbc/psqlodbc.c  2000/07/06 10:01:04
@@ -33,8 +33,6 @@
 
 GLOBAL_VALUES globals;
 
-BOOL _init(void);
-BOOL _fini(void);
 RETCODE SQL_API SQLDummyOrdinal(void);
 
 #ifdef WIN32
@@ -96,6 +94,20 @@
 #ifndef FALSE
 #define FALSE  (BOOL)0
 #endif
+
+#ifdef __GNUC__
+
+/* This function is called at library initialization time.  */
+
+static BOOL
+__attribute__((constructor))
+init(void)
+{
+   getGlobalDefaults(DBMS_NAME, ODBCINST_INI, FALSE);
+   return TRUE;
+}
+
+#else
 
 /* These two functions do shared library initialziation on UNIX, well at least
  * on Linux. I don't know about other systems.

Andreas.

-- 
Andreas Schwab  "And now for something
SuSE Labscompletely different."
[EMAIL PROTECTED]
SuSE GmbH, Schanzäckerstr. 10, D-90443 Nürnberg



[BUGS] low performance

2001-08-20 Thread Andreas Wernitznig

I am running the precompiled binary of PostgreSQL 7.1.2 on a Red Hat 7.1 system (on a dual
Celeron system with 256MB, kernel 2.4.4 and 2.4.5).
(The installation of the new 7.1.3 doesn't seem to solve the problem.)

I am connecting to the DB with a Perl program (using Perl 5.6.0 with DBD-Pg-1.01 and
DBI-1.19).
The program inserts some million rows into a db with about 30 tables. The processing
takes (if everything works fine) about 10 hours to complete. Usually my Perl script
and the database share the available CPU time 50:50.
But sometimes the database is very slow, eating up most (>98%) of the available CPU
time.
(Of course I know VACUUM and VACUUM ANALYZE; this is not the problem.)

The only thing that seems to help then is killing the Perl script, stopping
postgresql, running "ipcclean", and starting again from the beginning. If it works from
the beginning, the database is usually very fast until all data are processed.

But if someone else connects (using psql), sometimes the database gets very slow until
it is using all the CPU time.

There are no error messages at postgres startup.
I already increased the number of buffers to 2048 (doesn't help).

I cannot reproduce these problems; sometimes the db is fast, sometimes very slow. The
Perl script doesn't seem to be the problem, because I wrote all SQL commands to a file
and processed them later ("psql dbname postgres < SQL-File").
Same thing: sometimes slow, sometimes fast.

Andreas






Re: [BUGS] low performance

2001-08-20 Thread Andreas Wernitznig


It is running as many transactions; a transaction is committed at least every 5 inserts.
The filesystem doesn't get slow (reading a (big) file still works at >20 MBytes/s).

14839 postgres  20   0 19948  19M 18980 R    98.5  7.7 477:24 postmaster
14819 postgres   8   0  1856 1856  1700 S     0.0  0.7   0:00 postmaster
14838 andreas    9   0 15228  14M  1796 S     0.7  5.9  11:58 parse.pl

The main postmaster is job 14819 (0.0% CPU). The postmaster spawned by perl is sucking 
up 98.5% CPU.

cat /proc/meminfo writes:

total:used:free:  shared: buffers:  cached:
Mem:  261959680 260149248  18104320  6115328 129863680
Swap: 133885952   204800 133681152
MemTotal:   255820 kB
MemFree:  1768 kB
MemShared:   0 kB
Buffers:  5972 kB
Cached: 126820 kB
Active:  38432 kB
Inact_dirty: 83408 kB
Inact_clean: 10952 kB
Inact_target:  520 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255820 kB
LowFree:  1768 kB
SwapTotal:  130748 kB
SwapFree:   130548 kB


On Mon, 20 Aug 2001 10:28:04 -0700 (MST)
grant <[EMAIL PROTECTED]> wrote:

> Is this running as one transaction, or is it not a transaction?  Have you
> tried committing every 10,000 or so if it is in one transaction?  It could
> be a logging problem with the transaction being too big.
> 
> Does the file system as a whole get slow, or just Postgres?  Is it one
> connection, or does it disconnect and reconnect a lot?
> 
> Is it the main postmaster sucking up all the CPU, or the one spawned by
> the PERL, or the one spawned by psql?
> 
> How much do the file system cache and io buffers grow?
> __
> 
>   Your mouse has moved.
>You must restart Windows for your changes to take effect.
> 
> #!/usr/bin/perl
> print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);
> 
> 
> 






[BUGS] Re: low performance

2001-08-21 Thread Andreas Wernitznig

I am aware of the performance drawbacks because of indices and triggers. In fact I 
have a trigger and an index on the most populated table.
It is not possible in my case to remove the primary keys during insert, because the 
database structure and foreign keys validate my data during import. 

The problem is that sometimes the performance is good, and sometimes the database is
awfully slow.
If it is slow, postgres is eating up all CPU time and it takes at least 150 times
longer to insert the data.
I don't know why, or what to do about it.

Andreas

On Mon, 20 Aug 2001 19:39:31 -0400
Jonas Lindholm <[EMAIL PROTECTED]> wrote:

> Do you have any index on the tables ? Any triggers ?
> 
> If you want to insert 1 million rows you should drop the index, insert the data and 
>then recreate the index.
> You should also try the COPY command to insert the data.
> 
> You should also avoid having anyone to connect to the database when you insert a lot 
>of rows, and 1 million rows are a lot of rows for any database.
> 
> I've been able to insert, in one table, 17 million record in ~3 hours on a Compaq 
>SMP 750 Mhz with 512MB
> by dropping the index, using several COPY commands at the same time loading 
>different parts of the data and then creating the index again.
> At the time of the inserts no other processes than the COPY's was connected to the 
>database.
> 
> /Jonas Lindholm
> 
> 
> Andreas Wernitznig wrote:
> 
> > I am running the precomplied binary of Postgreql 7.1.2 on a Redhat 7.1 (on a Dual 
>Celeron System with 256MB, kernel 2.4.4 and 2.4.5) System.
> > (The installation of the new 7.1.3 doesn't seem to solve the problem)
> >
> > I am connecting to the DB with a Perl Program (using Perl 5.6.0 with DBD-Pg-1.01 
>and DBI-1.19).
> > The program inserts some million rows into a db with about 30 tables. The 
>processing takes (if everyting works fine) about 10 hours to complete. Usually the my 
>Perl-Script and the database share the available CPU time 50:50.
> > But sometimes the database is very slow eating up most (>98%) of the available CPU 
>time.
> > (Of cause I know VACUUM and VACUUM ANALYZE, this is not the problem).
> >
> > The only thing that seems to help then, is killing the perl script, stopping 
>postgresql, running "ipcclean", and start again from the beginning. If it works from 
>the beginning, the database is ususally very fast until all data are processed.
> >
> > But if someone else connects (using psql), sometimes the database gets very slow 
>until it is using all the CPU time.
> >
> > There are no error messages at postgres-startup.
> > I already increased the number of buffers to 2048 (doesn't help)
> >
> > I cannot reproduce these problems, sometimes the db is fast, sometimes very slow. 
>The perl script doesn't seem to be the problem, because I wrote all SQL Commands to a 
>file and processed them later ("psql dbname postgres < SQL-File").
> > Same thing: sometimes slow sometimes fast.
> >
> > Andreas
> 





Re: [BUGS] Re: low performance

2001-08-22 Thread Andreas Wernitznig

I took option 1 and managed to create a profile of a slow and a fast run:

The frequent functions of the FAST run:

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
  0.00  0.00 0.00 15725437 0.00 0.00  AllocSetAlloc
  0.00  0.00 0.00 15380742 0.00 0.00  MemoryContextAlloc
  0.00  0.00 0.00 11296700 0.00 0.00  ExecEvalExpr
  0.00  0.00 0.00  8276639 0.00 0.00  newNode
  0.00  0.00 0.00  5430717 0.00 0.00  MemoryContextSwitchTo
  0.00  0.00 0.00  4492641 0.00 0.00  LockBuffer
  0.00  0.00 0.00  4425642 0.00 0.00  AllocSetFree
  0.00  0.00 0.00  4356571 0.00 0.00  pfree
  0.00  0.00 0.00  3873174 0.00 0.00  pq_getbyte
  0.00  0.00 0.00  3799725 0.00 0.00  appendStringInfoChar

The frequent functions of the SLOW run:

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
  0.00  0.00 0.00 27832819 0.00 0.00  ExecEvalExpr
  0.00  0.00 0.00 19040887 0.00 0.00  AllocSetAlloc
  0.00  0.00 0.00 18976313 0.00 0.00  MemoryContextAlloc
  0.00  0.00 0.00 18722462 0.00 0.00  LockBuffer
  0.00  0.00 0.00 18684832 0.00 0.00  MemoryContextSwitchTo
  0.00  0.00 0.00 18442039 0.00 0.00  pg_detoast_datum
  0.00  0.00 0.00 16947638 0.00 0.00  AllocSetFree
  0.00  0.00 0.00 16934648 0.00 0.00  pfree
  0.00  0.00 0.00  9716164 0.00 0.00  SpinAcquire
  0.00  0.00 0.00  9716164 0.00 0.00  SpinRelease

Since these files are too big for a posting, I have put the whole profile files on:
ftp://ftp.insilico.com/out.fast.gz
ftp://ftp.insilico.com/out.slow.gz

I don't know why the time column and the number of seconds are zero in all cases.
I am using the Redhat 7.1 binutils (binutils-2.10.91.0.2-3).

On Tue, 21 Aug 2001 17:38:23 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I am aware of the performance drawbacks because of indices and
> > triggers. In fact I have a trigger and an index on the most populated
> > table.  It is not possible in my case to remove the primary keys
> > during insert, because the database structure and foreign keys
> > validate my data during import.
> 
> Foreign keys eh?
> 
> > The problem is, that sometimes the performance is good, and sometimes
> > the database is awfully slow.  If it is slow, postgres is eating up
> > all CPU time and it takes at least 150 times longer to insert the
> > data.  I don't know why and what to do against that.
> 
> We found some foreign-key-related performance problems not long ago,
> and it could be you're happening on another one.  However there's not
> enough info here to figure it out.  I can offer you two alternatives:
> 
> 1. Compile up the backend with profiling enabled (if you're using gcc
> then "make PROFILE=-pg clean all" in src/backend should do the trick).
> Collect profiles for both a "normal" and a "slow" run and send them in.
> 
> 2. Develop a self-contained example that exhibits the problem, and send
> it along for someone else to profile.
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html
> 





Re: [BUGS] Re: low performance

2001-08-28 Thread Andreas Wernitznig

On Wed, 22 Aug 2001 19:19:42 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > I took option 1 and managed to create a profile of a slow and a fast run:
> 
> It's difficult to compare these profiles, because they seem to be taken
> over very different numbers of queries --- did you let the "fast" run
> process more queries than the "slow" one?

Both runs were running for about 5 minutes. The fast one processes about 10 times
more data than the slow one. After a longer time this ratio gets even worse. The
databases contain about 13 rows before the start of the run.

To make it more comparable I have made two additional runs, a slow and a fast one with
exactly the same number of inserts (about 20500), and put them on our ftp server:

ftp://ftp.insilico.com/def.fast.gz
ftp://ftp.insilico.com/def.slow.gz

The fast run takes about 5 minutes, the slow one took 59 minutes.

*SLOW* Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
100.00  0.01 0.01 reaper
  0.00  0.01 0.00 555836881 0.00 0.00  ExecEvalExpr
  0.00  0.01 0.00 372917548 0.00 0.00  LockBuffer
  0.00  0.01 0.00 369206213 0.00 0.00  MemoryContextSwitchTo
  0.00  0.01 0.00 366016306 0.00 0.00  pg_detoast_datum
  0.00  0.01 0.00 342868028 0.00 0.00  AllocSetAlloc
  0.00  0.01 0.00 342579484 0.00 0.00  MemoryContextAlloc
  0.00  0.01 0.00 333623357 0.00 0.00  AllocSetFree
  0.00  0.01 0.00 333565521 0.00 0.00  pfree
  0.00  0.01 0.00 189738579 0.00 0.00  SpinAcquire
  0.00  0.01 0.00 189738579 0.00 0.00  SpinRelease


*FAST* Flat profile:
Each sample counts as 0.01 seconds.
 no time accumulated

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
  0.00  0.00 0.00 13439626 0.00 0.00  AllocSetAlloc
  0.00  0.00 0.00 13151082 0.00 0.00  MemoryContextAlloc
  0.00  0.00 0.00  8194227 0.00 0.00  ExecEvalExpr
  0.00  0.00 0.00  6962789 0.00 0.00  newNode
  0.00  0.00 0.00  4072271 0.00 0.00  MemoryContextSwitchTo
  0.00  0.00 0.00  3931730 0.00 0.00  AllocSetFree
  0.00  0.00 0.00  3873894 0.00 0.00  pfree
  0.00  0.00 0.00  3389344 0.00 0.00  LockBuffer
  0.00  0.00 0.00  3253378 0.00 0.00  pq_getbyte
  0.00  0.00 0.00  3191526 0.00 0.00  appendStringInfoChar


What does the first row of the slow run (reaper) mean?

> However, I think what is happening is that some queries are being done
> as indexscans in the fast case and seqscans in the slow case.  The
> ratio of ExecIndexScan calls to ExecSeqScan calls is vastly different
> in the two profiles.

Do the new profiles prove that assumption?
 
> It looks like you are doing vacuums fairly frequently, so I speculate
> that the statistics gathered by vacuum are changing just enough to alter
> the planner's choice between indexscan and seqscan for some
> often-executed query type.  Evidently the planner is guessing the costs
> to be nearly the same, so a small change in stats might tip the choice
> --- but in reality the costs are quite a bit different, thus you observe
> fast and slow behavior.

In all of the profiled runs no "vacuum" was executed.

Another question:
When I am executing a run, does a vacuum issued from another backend (postmaster)
influence the planner of the run? (By the way, I didn't do that for the profiled
samples.)

I am hardly executing any queries during the runs (only inserts). I am only running 
SELECTs on two tables with 600 and 200 rows, respectively.

If I have a database with some data inside and run "vacuum" and "vacuum analyze" and 
then delete the data and start the run it is FAST.
If I run "vacuum" and "vacuum analyze" on an empty database, the following run will be 
a SLOW one.
It seems that the planner plans a "Seq Scan" if vacuum analyze was executed on a empty 
database and an "Index Scan" if vacuum analyze was executed on a full database. (as 
expected)

> The next step should be to get EXPLAIN results for the queries used
> by your application in both fast and slow states.  This will help us
> narrow down where the planner's misprediction is occurring.

I built some "explain select ..." statements on the most populated table into my parser
script (such a select is, I assume, what the pk trigger performs) and got confirmation
of the assumption above (Seq Scan vs. Index Scan).

If I am in a slow

Re: [BUGS] PD: triggered data change violation on relation "tbl_b"

2001-08-29 Thread Andreas Wernitznig

You cannot insert and then delete the same row within one transaction.
Only one change of a row is allowed.

Greetings
Andreas

On Wed, 29 Aug 2001 13:18:02 +0200
"Pawel Pawlowski" <[EMAIL PROTECTED]> wrote:

> When I insert to table new row and after this in the same transaction I delete this 
>row I get such error: 
> triggered data change violation on relation "tbl_b"
> 
> I've created database using simple script:
> 
> CREATE TABLE tbl_a
> (
>   pn_id SERIAL,
>   pn_a  VARCHAR(400) NOT NULL,
>   PRIMARY KEY (pn_id)
> );
> CREATE TABLE tbl_b
> (
>   pc_id  INT4 NOT NULL REFERENCES tbl_a (pn_id) ON UPDATE CASCADE ON DELETE 
>CASCADE,
>   pc_b   VARCHAR(40) NOT NULL,
>   PRIMARY KEY (pc_id, pc_b)  
> );
> INSERT INTO tbl_a VALUES (1, 'xxx');
> 
> And this is the sample script that I use to generete this bug:
> 
> begin transaction;
> insert into tbl_b values (1, 'xxx');
> delete from tbl_b where pc_id=1;
> ERROR:  triggered data change violation on relation "tbl_b"
> 
> How to solve this problem ?
 


--
  Andreas Wernitznig
Insilico Software GmbH
 E-Mail: [EMAIL PROTECTED]
Web: www.insilico.com
--




Re: [BUGS] Re: low performance

2001-09-03 Thread Andreas Wernitznig

Yes, I understand very clearly what you mean.

Maybe my mails were too confusing, so I'll try to explain my problem once more:

step 1. An empty table with a primary key (=index key), where an "explain" tells me
that a Seq Scan is used to SELECT a specific row.
step 2. Then I start to fill data into that table.
step 3. Then I run a vacuum analyze to update the planner statistics.
step 4. I run an "EXPLAIN select * from  where  = 999;"
step 5. Then I fill in additional data.

What I expect is that from step 5 on, the pk trigger (I don't know if this mechanism
that checks for uniqueness is really a trigger) uses the index to check for possible
duplicate entries.
Although "EXPLAIN" in step 4 claims to use an Index Scan, the data insert becomes
slower and slower (>98% of the processor is used by a postmaster). All these steps are
done over a single connection (postmaster).

The only way to make it faster after step 3 is to close that connection (and stop that
postmaster process with it) and establish a new one.
It seems like the planner (at least for pk checking) of an *established* connection to
a database doesn't receive the information gained from "vacuum analyze".

Greetings 
Andreas

On Mon, 03 Sep 2001 12:26:39 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > To make it more comparable I have made two additional runs, a slow and
> > a fast one with exactly the same number of inserts (about 20500) and
> > put it on our ftp server:
> 
> >> However, I think what is happening is that some queries are being done
> >> as indexscans in the fast case and seqscans in the slow case.  The
> >> ratio of ExecIndexScan calls to ExecSeqScan calls is vastly different
> >> in the two profiles.
> 
> > Do the new profiles prove that assumption?
> 
> Yes, see for yourself:
> def.fast:
>   0.00  0.00 0.0022481 0.00 0.00  ExecSeqScan
>   0.00  0.00 0.0020161 0.00 0.00  ExecIndexScan
> def.slow:
>   0.00  0.01 0.0041940 0.00 0.00  ExecSeqScan
>   0.00  0.01 0.00  702 0.00 0.00  ExecIndexScan
> 
> So there are about 19500 queries that are being done as indexscans in
> one case and seqscans in the other.
> 
> > If I run "vacuum" and "vacuum analyze" on an empty database, the
> > following run will be a SLOW one.
> 
> The whole point of vacuum analyze is to give the planner some statistics
> about the contents of the tables.  Vacuum analyze when a table is empty
> is useless (even counterproductive, if the table shortly thereafter
> becomes large --- the planner will still think it is empty).
> 
>   regards, tom lane
> 




[BUGS] backend closed the channel unexpectedly

2001-09-07 Thread Andreas Wernitznig


This is the last part of a "vacuum verbose analyze;":

NOTICE:  --Relation pg_toast_17058--
NOTICE:  Pages 2: Changed 0, reaped 0, Empty 0, New 0; Tup 9: Vac 0, Keep/VTL 0/0,
Crash 0, UnUsed 0, MinLen 113, MaxLen 2034; Re-using: Free/Avail. Space 0/0;
EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.
NOTICE:  Index pg_toast_17058_idx: Pages 2; Tuples 9. CPU 0.00s/0.00u sec.
NOTICE:  Analyzing...
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

This usually happens after some millions of rows are inserted into the database.
What can I do to avoid this error?

Greetings
Andreas




Re: [BUGS] Bug #474: Index using problem

2001-10-05 Thread Andreas Wernitznig

The query optimizer uses the index only if enough data are present in the table.
If only a few rows are available, a Seq Scan performs better and is therefore used.

Further, one of the problems (which is hopefully solved in version 7.2) is that the
query plan used for primary key/foreign key checks is not updated for an uninterrupted
database connection.

E.g.:
If I have an empty database, "Seq Scan" is used in any case. Then I connect to the
database, fill in some data, and execute a "VACUUM ANALYZE" (which updates the
statistics for the query optimizer).
If I don't disconnect from the database but use the first connection again, the
database still uses the (now) slower "Seq Scan" for primary/foreign key checking. In
this case the query optimizer statistics are not applied for established connections.

My suggestion and question to the PostgreSQL programmers is now:
Why don't you skip the automatic index creation for primary keys and let the user
decide to create an index that should be used in any case, regardless of whether the
table is full or empty?

Andreas

On Fri, 5 Oct 2001 03:21:01 -0400 (EDT)
[EMAIL PROTECTED] wrote:

> JoE Olcsak ([EMAIL PROTECTED]) reports a bug with a severity of 4
> The lower the number the more severe it is.
> 
> Short Description
> Index using problem
> 
> Long Description
> I have a problem when using indexes under PostgreSQL 7.1.1 ...
> 
> The problem is :
>  
>   I have a field of table ... This is an INT4 type field ... and I
>   created an index for this field like :  
> create index idx_of_field on cim(utod_cim_id);
>   ... and I executed a simple query ...
> 
>   explain select * from cim where utod_cim_id=0;
>   
>   NOTICE:  QUERY PLAN:
> 
>   Seq Scan on cim  (cost=0.00..6555.41 rows=253633 width=118)
> 
>   EXPLAIN
>   
>   The query optimizer does not use my index..
> 
>   I'm executed vacuum analyze command for this table but this not   helps me ... 
> 
>   Then I created another index for this field:
> 
>   create index idx_of_field2 on cim(int4(utod_cim_id));
>   
>   ... and I executed another simple query :
> 
>explain select * from cim where int4(utod_cim_id)=0;
>NOTICE:  QUERY PLAN:
> 
>Index Scan using idx_of_field2 on cim  (cost=0.00..6499.70rows=2536 width=118)
> 
>EXPLAIN
>  
> What is wrong in the first case ?
> 
> 
> 
> Sample Code
> 
> 
> No file was uploaded with this report
> 




Re: [BUGS] Bug #474: Index using problem

2001-10-05 Thread Andreas Wernitznig

1. I created a small database consisting of two connected tables:

create table table1 (
index1int4not null,
textfield  varchar(1000) not null,
constraint PK_table1 primary key (index1)
);

create table table2 (
index2int4not null,
index1int4not null,
textfield varchar(1000) not null,
constraint PK_table2 primary key (index2),
constraint FK_table1 foreign key (index1)
   references table1 (index1)
   on delete restrict on update restrict
);

2. Then I insert 100 rows ($n=1..100) in each of these tables:

insert into table1 VALUES ($n, '123456789');
insert into table2 VALUES ($n, $n, '123456789');

3. Then I send a "vacuum analyze" and an "explain select * from table1 where
index1 = 543;"
The output is:
NOTICE:  QUERY PLAN:
Index Scan using pk_table1 on table1  (cost=0.00..2.01 rows=1 width=16)

4. Then I insert 4900 rows into each of these tables like in step 2.



-- Test A: --
Then I send a "vacuum analyze;" 
  and "delete from table2;", 
  and "delete from table1;" 
and rerun steps 2 to 4 -> step 4 takes 39 seconds.

-- Test B: --
Then I send "delete from table2;",
and "delete from table1;", 
and a "vacuum analyze;" 
and rerun steps 2 to 4 -> step 4 takes 81 seconds.

Although the "explain" command tells me that an Index Scan is used, step 4 is much 
slower in Test B.
For both tests (steps 2-4) I use one connection to the database. 
If I quit the connection after step 3 and establish a new connection for step 4, it
takes 39 seconds in either case.
-> Using one connection, the optimizer for pk/fk checking is not updated by a "vacuum
analyze".

On Fri, 05 Oct 2001 09:52:20 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > If I don't disconnect from the database but use the first connection
> > again, the database still uses the (now) slower "seq scan" for
> > "primary/foreign key" checking. In this case the query optimizer
> > statistics are not updated for established connections.
> 
> Sure they are --- in my tests, anyway.  What did you do *exactly*?
> 
>   regards, tom lane
> 




Re: [BUGS] Bug #474: Index using problem

2001-10-05 Thread Andreas Wernitznig

Why don't you skip the automatic index creation for primary keys and let the user
decide to create an index that should be used in any case, regardless of what the
query planner recommends?

On Fri, 05 Oct 2001 15:15:06 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > -> Using one connection the optimizer for pk/fk-checking is not
> > updated by a "vacuum analyze".
> 
> Oh, I misunderstood you the first time: I thought you were saying that
> *other* backends couldn't see the results of the VACUUM.
> 
> The reason for this behavior is that the foreign key checker caches a
> plan for each foreign-key-checking query the first time it needs to
> use that query (within a given backend).  There should be a mechanism
> to flush those cached plans when circumstances change ... but currently
> there isn't.
> 
>   regards, tom lane




Re: [BUGS] Bug #474: Index using problem

2001-10-05 Thread Andreas Wernitznig

I don't know how the primary key checking and the index scan are related in postgresql
(but I am keen to learn more about it).
From Sybase ASA (SQL Anywhere) I know that these two functions (pk checking, index
search) are decoupled.
(That means even a primary key without an index is possible; this only makes sense in
tables with a few rows.)

The pg planner takes the current number of rows into account to calculate a query plan
that should give the best performance.
If the number of rows changes (and the user/db programmer usually knows if it
changes), the planner sometimes makes wrong decisions.
Then I have to execute a "vacuum analyze", or reconnect in the case of foreign key
checking.

I would like to tune postgresql to use an index in any case if one is available.
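
The closest session-level knob for that is disabling sequential scans, which
only biases the planner rather than truly forcing an index (a sketch, using
the tables from my earlier test case):

SET enable_seqscan = off;   -- makes seqscans look very expensive to the planner
EXPLAIN SELECT * FROM table1 WHERE index1 = 543;
SET enable_seqscan = on;    -- restore the default afterwards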

On Fri, 05 Oct 2001 18:01:08 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Andreas Wernitznig <[EMAIL PROTECTED]> writes:
> > Why don't you skip the automatic index creation for primary keys
> 
> And how then would we enforce uniqueness of the primary key?
> 
> > and let the user decide to create an index, that should be used in any
> > case, regardless what the query planner recommends ?
> 
> What?  AFAIR the planner does not even *know* whether an index is
> associated with a primary key.  It certainly does not give such an
> index any special preference.
> 
>   regards, tom lane
> 




[BUGS] Handling large Objects with Tcl/Tk

2001-12-03 Thread Andreas Kretzer

Hi everyone!

While having problems with large objects I discovered
a faulty implementation of the 'rw' parameter for
pg_lo_open.

in the file
src/interfaces/libpgtcl/pgtclCmds.c

the second letter of this parameter is incorporated
into the mode variable by _ANDING_ another value
(INV_READ / INV_WRITE respectively). This of course
will _always_ lead to a zero value.

You better write
mode |= INV_READ;
and
mode |= INV_WRITE;

This bug is still in the 7.2 beta 3 version!

And now a little question (I know this is not a forum,
just the bug report!):
Where can I post a question regarding the handling
of large objects? I still have a problem there!

Hope you can help with short e-mail.

Andreas





Re: [BUGS] Bug #630: date/time storage problem: timestamp parsed

2002-04-15 Thread Andreas Schwab

Thomas Lockhart <[EMAIL PROTECTED]> writes:

|> > This is the bug report against glibc that prompted the change:
|> > 
|http://bugs.gnu.org/cgi-bin/gnatsweb.pl?cmd=view%20audit-trail&database=default&pr=2738
|> > |> Ah, but this might explain why I've always seen on my Linux box a 1
|> > |> second offset returned from mktime() for dates before 1970. Everything
|> > |> is shifted to allow -1 to be a special value I'll bet...
|> > This is a joke, isn't it?
|> 
|> Yes and no; the behavior is in localtime(), not mktime() -- sorry for my
|> faulty memory. The case I am handling is in recovering local time given
|> a time_t (in UTC of course). I have independently derived a broken-down
|> time structure, so have both the original structure and the results of
|> localtime() available in my code. Here is the relevant comment snippet:

Do you have a testcase?

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE GmbH, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [BUGS] Bug #630: date/time storage problem: timestamp parsed

2002-04-15 Thread Andreas Schwab

Thomas Lockhart <[EMAIL PROTECTED]> writes:

|> > > I don't think that our code checks explicitly for a "-1" return, since
|> > > the range is checked just before the call, but it would probably be a
|> > > good idea if it did
|> > As I noticd yesterday, glibc's mktime() has in the current snapshot
|> > been changed to return -1 for dates before the epoch. Our glibc guru
|> > (Cc'ed) told me, this is according to the standards (C and POSIX)
|> > which say, that time_t is undefined for dates prior the epoch, which
|> > to me seems obvoius, because otherwise the error return couldn't be
|> > distinguished from the time_t value "one second before the epoch").
|> 
|> ??!! I'm sorry that I don't remember the exact context here (didn't this
|> thread start on a FreeBSD amchine?), but are you saying that glibc
|> shipped with Linux will potentially stop supporting times and time zones
|> before 1970?
|> 
|> Standard or not, there is a *long* history of all decent implementations
|> supporting dates prior to 1970, and platforms which do not do so (AIX?)
|> have always been a source of scorn and derision. Really.

This is the bug report against glibc that prompted the change:

http://bugs.gnu.org/cgi-bin/gnatsweb.pl?cmd=view%20audit-trail&database=default&pr=2738

|> Ah, but this might explain why I've always seen on my Linux box a 1
|> second offset returned from mktime() for dates before 1970. Everything
|> is shifted to allow -1 to be a special value I'll bet...

This is a joke, isn't it?

|> Yikes. That is not currently acceptable (most platforms deployed in the
|> world *do* handle dates and times before 1970), but if I'm understanding
|> things correctly we will need to somehow reimplement the entire time and
|> time zone support system within PostgreSQL. I'll start looking at the
|> FreeBSD code to see what is available. *sigh*

Since POSIX says years before 1970 are undefined, it seems you are right.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE GmbH, Deutschherrnstr. 15-19, D-90429 Nürnberg
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[BUGS] Too many function calls in view with LEFT JOIN

2006-05-30 Thread Andreas Heiduk
Hello!

If a view which calls a function is LEFT JOINed to a table and not all
of the view's rows are matched by the join criteria, the function is still called 
for each row of the view.

Interestingly, the same query without the view calls the
function only for those rows which really end up in the result set.

This discrepancy is of course Not-A-Good-Thing(tm) if the function has side
effects or is very expensive.

Note that this seems to happen only for a left join, not for an inner join.


The following example illustrates this by using a "noisy" function.

---
CREATE TABLE t1 (id int, t1val text);
CREATE TABLE t2 (id int, t2val int);
   
-- insert some test values
INSERT INTO t1 SELECT i, 'foo bar ' || i FROM generate_series(0, 20) i;
INSERT INTO t2 SELECT i, i*i FROM generate_series(0, 20) i;
   
-- create a noisy function
CREATE OR REPLACE FUNCTION notice(id int, val int) RETURNS int AS $$
BEGIN
RAISE NOTICE 'function called for (%, %)', id, val;
RETURN id;
END;
$$ LANGUAGE plpgsql;


-- direct query
SELECT t1.*, t2.*, notice(t2.id, t2.t2val)
FROM t1 LEFT JOIN t2 USING (id) WHERE id < 10;
-- result: only 10 NOTICE messages


-- same query with a view
CREATE VIEW t2v AS SELECT *, notice(id, t2val) FROM t2;
SELECT t1.*, t2v.* FROM t1 LEFT JOIN t2v USING (id) WHERE id < 10;
-- result: 20 NOTICE messages, 10 too many

---

I hope, this is really a bug and not something I didn't understand :-)

Best Regards
Andreas Heiduk

__
Verschicken Sie romantische, coole und witzige Bilder per SMS!
Jetzt bei WEB.DE FreeMail: http://f.web.de/?mc=021193


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [BUGS] Too many function calls in view with LEFT JOIN

2006-06-01 Thread Andreas Heiduk
Hello!


Tom Lane <[EMAIL PROTECTED]> wrote
> 
> Andreas Heiduk <[EMAIL PROTECTED]> writes:
> > If a view which calls a function is LEFT JOINed to a table but not all
> > result rows are matched by some criteria, then the function is called 
> > for each row of the view nevertheless.
> 
> > Note that this seems to happen only for left joins, not for a inner join.
> 
> I believe that's because the column is required to go to NULL in an
> unjoined row.  With a non-strict function, evaluating it after the join
> could yield wrong answers.  Try making the function strict.

First: declaring the function STRICT really does cut down the number of function 
calls, even for the SELECT via the view. 

But as far as I can tell both queries should always return the same results. So 
I don't understand why STRICT does not matter in the first query but is 
necessary in the second one, especially since the JOIN criterion is not 
affected by the function call.
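
For reference, declaring it strict is just a matter of adding the STRICT attribute
to the function from the original example:

CREATE OR REPLACE FUNCTION notice(id int, val int) RETURNS int AS $$
BEGIN
RAISE NOTICE 'function called for (%, %)', id, val;
RETURN id;
END;
$$ LANGUAGE plpgsql STRICT;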


Oh, and sorry that I forgot the version: "PostgreSQL 8.1.4 on 
i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.4 20060507 (prerelease) (Debian 
4.0.3-3)"

Best regards,
Andreas Heiduk


__
XXL-Speicher, PC-Virenschutz, Spartarife & mehr: Nur im WEB.DE Club!
Jetzt gratis testen! http://freemail.web.de/home/landingpad/?mc=021130


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[BUGS] Restoring 7.4 "pg_dumpall -o" output in 8.1 fails

2006-08-04 Thread Andreas Gustafsson
I recently attempted to upgrade a PostgreSQL 7.4 installation to 8.1 by
following the instructions at
<http://www.postgresql.org/docs/8.1/interactive/install-upgrading.html>,
having used "pg_dumpall -o" to dump the data in 7.4.  When I tried to
restore the data in 8.1, psql reported a large number of syntax
errors.

This happens even when restoring a trivial database cluster, such as
the empty one that exists immediately after installing.  Restoring
works fine if the data are dumped without "-o".

To reproduce (as a minimal test case):

   Install and start PostgreSQL 7.4.13

   # su pgsql -c 'pg_dumpall -o' >backup.pgdump-o

   Stop 7.4.3 and install and start 8.1.4

   # su pgsql -c 'psql -d postgres -f backup.pgdump-o'

This produces the output:

You are now connected to database "template1".
psql:/tmp/backup.pgdump-o:11: ERROR:  cannot delete from a view
HINT:  You need an unconditional ON DELETE DO INSTEAD rule.
psql:/tmp/backup.pgdump-o:19: ERROR:  cannot delete from a view
HINT:  You need an unconditional ON DELETE DO INSTEAD rule.
You are now connected to database "template1".
SET
SET
CREATE TABLE
psql:/tmp/backup.pgdump-o:44: ERROR:  table "pgdump_oid" does not have OIDs
psql:/tmp/backup.pgdump-o:46: invalid command \.
psql:/tmp/backup.pgdump-o:47: ERROR:  syntax error at or near "17145" at 
character 1
psql:/tmp/backup.pgdump-o:47: LINE 1: 17145 0
psql:/tmp/backup.pgdump-o:47: ^
SET
REVOKE
GRANT
SET
COMMENT
SET
COMMENT

When attempting to restore the actual production database there were
numerous additional syntax errors.  I can't show all of them (nor the
full database contents) because they contain confidential information,
but here are a few examples:

psql:backup:741: ERROR:  table "radacct" does not have OIDs
psql:backup:742: invalid command \.
psql:backup:750: ERROR:  table "radcheck" does not have OIDs
psql:backup:752: invalid command \.
psql:backup:760: ERROR:  syntax error at or near "17302" at character 1
psql:backup:760: LINE 1: 17302 1 gson Password == x

psql:backup:845: ERROR:  syntax error at or near "17306" at character 1
psql:backup:845: LINE 1: 17306 gson-frendit 0030bdfec250
psql:backup:845: ^
psql:backup:846: invalid command \N
psql:backup:847: invalid command \N
psql:backup:848: invalid command \N

This is on NetBSD-current, i386.
-- 
Andreas Gustafsson, [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [BUGS] Restoring 7.4 "pg_dumpall -o" output in 8.1 fails

2006-08-07 Thread Andreas Gustafsson
Tom Lane wrote:
> Try making the dump from the 7.4 server using 8.1's pg_dump(all).
> 
> If you've already wiped the 7.4 installation then you'll have to deal
> with the incompatibilities yourself.  It looked like setting
> default_with_oids to true in the 8.1 server would help (although do you
> *really* need to preserve OIDs in your dump?  That -o switch is pretty
> well deprecated these days).  The "cannot delete from a view" is
> probably coming from an attempt to "DELETE FROM pg_shadow", which you
> can ignore.  You didn't show us any other problems.

What I did was to reinstall 7.4, which successfully restored the dump.

I don't think I actually need to preserve OIDs; I was using the -o
option out of habit, having been recommended to use it at some point.
In any case, that's not the point; I'm not looking for support or
workarounds, but simply to have the bug fixed. Whether or not I need
to preserve OIDs, the documented upgrade procedure for the case where
OIDs do need to be preserved is not working, and that clearly is a bug in
either PostgreSQL itself or the documentation.
-- 
Andreas Gustafsson, [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: explain analyze is your friend


[BUGS] BUG #2598: Columns named R are not accessible - although R is not a keyword

2006-08-30 Thread Andreas Langegger

The following bug has been logged online:

Bug reference:  2598
Logged by:  Andreas Langegger
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Gentoo Linux 3.3.6, Kernel 2.6.16
Description:Columns named R are not accessible - although R is not a
keyword
Details: 

It seems that the column name "R" or "r" is reserved. If I try to insert
tuples I get the error message:

ERROR:  column "r" of relation "xyz" does not exist

If I alter the column name to "a" it works. Renaming it back to "R" or
"r" gives the same error. But it is not documented as a name conflict / reserved
keyword...

Regards,
Andy

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [BUGS] BUG #2598: Columns named R are not accessible - although

2006-08-30 Thread Andreas Langegger

Well, I also tried your simple regression test and it worked, which makes
me wonder all the more why this does not:

CREATE TABLE sunspots
(
  sunspots_key int4 NOT NULL,
  -- DEFAULT nextval('sunspot_seq'::regclass)
  date date NOT NULL,
  "time" time NOT NULL,
  groups int2,
  spots int2,
  seeing int2,
  ruhe int2,
  tinygrps int2,
  remarks varchar(85),
  "R" int2,
  groups_s int2,
  spots_s int2,
  scientist_id int4
--  ,CONSTRAINT sunspot_pkey PRIMARY KEY (sunspots_key),
--  CONSTRAINT fk_sunspot_scientist FOREIGN KEY (scientist_id)
--    REFERENCES scientist (scientist_key) ON UPDATE NO ACTION ON
--    DELETE NO ACTION,
--  CONSTRAINT sunspot_seeing_fkey FOREIGN KEY (seeing) REFERENCES
--    seeing_quality (seeing10) ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITHOUT OIDS;

insert into sunspots (sunspots_key, date, time, remarks, R) values
(9, '2006-08-30', '12:00:00', 'no comments', 30);

causes:
ERROR:  column "r" of relation "sunspots" does not exist

regards,
Andy

Tom Lane wrote:
> "Andreas Langegger" <[EMAIL PROTECTED]> writes:
>> It seems that the column name "R" or "r" is reserved.
> 
> Hardly.
> 
>> If I want to insert tuples I get the error message:
> 
>> ERROR:  column "r" of relation "xyz" does not exist
> 
> Worksforme:
> 
> regression=# create table xyz(r int);
> CREATE TABLE
> regression=# insert into xyz(r) values(42);
> INSERT 0 1
> 
> Perhaps you could provide a self-contained test case?
> 
>   regards, tom lane

--
Dipl.-Ing.(FH) Andreas Langegger
Institute of Applied Knowledge Processing
Johannes Kepler University Linz
A-4040 Linz, Altenberger Straße 69
> http://www.faw.at
> http://www.langegger.at

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [BUGS] backup + restore fails

2006-10-20 Thread Andreas Seltenreich
Holger Schoenen writes:

> ERROR:  invalid byte sequence for encoding "UTF8": 0xe46973
> Command was: --
[...]
> -- Started on 2006-09-15 14:56:51 Westeuropäische Normalzeit

The same problem was recently reported on the pgsql-de-allgemein list.

Would just avoiding %Z in Win32's strftime be an acceptable solution?
elog.c is already doing this, however because of the length of the
zone names, not the localization problem.  The attached patch is
completely untested because I don't have access to a win32 box.

regards,
andreas

Index: src/bin/pg_dump/pg_backup_archiver.c
===
RCS file: /projects/cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v
retrieving revision 1.137
diff -c -r1.137 pg_backup_archiver.c
*** src/bin/pg_dump/pg_backup_archiver.c14 Oct 2006 23:07:22 -  
1.137
--- src/bin/pg_dump/pg_backup_archiver.c20 Oct 2006 18:59:11 -
***
*** 2780,2785 
  {
charbuf[256];
  
!   if (strftime(buf, 256, "%Y-%m-%d %H:%M:%S %Z", localtime(&tim)) != 0)
ahprintf(AH, "-- %s %s\n\n", msg, buf);
  }
--- 2780,2793 
  {
charbuf[256];
  
!   if (strftime(buf, 256,
!/* Win32 timezone names are long and localized 
and
! * can interfere with utf-8 dumps */
! #ifndef WIN32
!"%Y-%m-%d %H:%M:%S %Z",
! #else
!"%Y-%m-%d %H:%M:%S",
! #endif
!localtime(&tim)) != 0)
ahprintf(AH, "-- %s %s\n\n", msg, buf);
  }
Index: src/bin/pg_dump/pg_dumpall.c
===
RCS file: /projects/cvsroot/pgsql/src/bin/pg_dump/pg_dumpall.c,v
retrieving revision 1.84
diff -c -r1.84 pg_dumpall.c
*** src/bin/pg_dump/pg_dumpall.c7 Oct 2006 20:59:05 -   1.84
--- src/bin/pg_dump/pg_dumpall.c20 Oct 2006 18:59:12 -
***
*** 1320,1325 
charbuf[256];
time_t  now = time(NULL);
  
!   if (strftime(buf, 256, "%Y-%m-%d %H:%M:%S %Z", localtime(&now)) != 0)
printf("-- %s %s\n\n", msg, buf);
  }
--- 1320,1333 
charbuf[256];
time_t  now = time(NULL);
  
!   if (strftime(buf, 256,
!/* Win32 timezone names are long and localized 
and
! * can interfere with utf-8 dumps */
! #ifndef WIN32
!"%Y-%m-%d %H:%M:%S %Z",
! #else
!"%Y-%m-%d %H:%M:%S",
! #endif
!localtime(&now)) != 0)
printf("-- %s %s\n\n", msg, buf);
  }

---(end of broadcast)---
TIP 6: explain analyze is your friend


[BUGS] 8.2bet2 failed build on Solaris 10 / x86-64 / SUN Studio 11

2006-11-03 Thread Andreas Lange
Hi,

I have problems building 8.2beta2 on a Solaris 10 x86-64 machine:

gmake[4]: Entering directory
`/files/dsk1/lsw/src/postgresql/postgresql-8.2beta2/src/backend/utils/adt'
/sw/sun-studio-11/SUNWspro/bin/cc -Xa -fast -fns=no -fsimple=1
-xtarget=opteron -xarch=amd64a -I../../../../src/include   -c -o float.o
float.c
"float.c", line 113: identifier redeclared: cbrt
current : static function(double) returning double
previous: function(double) returning double :
"/usr/include/iso/math_c99.h", line 126
cc: acomp failed for float.c
gmake[4]: *** [float.o] Error 2

This is the code in question:

#ifndef HAVE_CBRT
static double cbrt(double x);
#endif   /* HAVE_CBRT */

And here is from configure:

checking whether gettimeofday takes only one argument... no
checking for cbrt... no
checking for dlopen... yes

8.1.5 configured and built (with slock backported) on the same machine
finds cbrt and passes the 'gmake check' just fine.

Is this a known error or due to some intentional change that I've missed?

Regards,
   Andreas





---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [BUGS] 8.2bet2 failed build on Solaris 10 / x86-64 / SUN Studio

2006-11-03 Thread Andreas Lange




Alvaro Herrera wrote:
> Andreas Lange wrote:
>> And here is from configure:
>>
>> checking whether gettimeofday takes only one argument... no
>> checking for cbrt... no
>
> Undoubtedly this is the problem.  Can you show the relevant config.log
> extract?

Ok, here we go:

configure:13462: checking for cbrt
configure:13519: /sw/sun-studio-11/SUNWspro/bin/cc -Xa -o conftest
-fast -fns=no -fsimple=1 -xtarget=opteron -xarch=amd64a conftest.c
-lz -lrt -lsocket  >&5
"conftest.c", line 104: warning: statement not reached
Undefined   first referenced
 symbol in file
cbrt    conftest.o
ld: fatal: Symbol referencing errors. No output written to conftest
configure:13525: $? = 1
configure: failed program was:
| /* confdefs.h.  */
|
| #define PACKAGE_NAME "PostgreSQL"
| #define PACKAGE_TARNAME "postgresql"
| #define PACKAGE_VERSION "8.2beta2"
| #define PACKAGE_STRING "PostgreSQL 8.2beta2"
| #define PACKAGE_BUGREPORT "pgsql-bugs@postgresql.org"
| #define PG_VERSION "8.2beta2"
| #define DEF_PGPORT 5432
| #define DEF_PGPORT_STR "5432"
| #define PG_VERSION_STR "PostgreSQL 8.2beta2 on i386-pc-solaris2.10,
compiled by /sw/sun-studio-11/SUNWspro/bin/cc -Xa"
| #define PG_KRB_SRVNAM "postgres"
| #define PG_VERSION_NUM 80200
| #define HAVE_LIBZ 1
| #define HAVE_SPINLOCKS 1
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_CRYPT_H 1
| #define HAVE_GETOPT_H 1
| #define HAVE_IEEEFP_H 1
| #define HAVE_LANGINFO_H 1
| #define HAVE_POLL_H 1
| #define HAVE_PWD_H 1
| #define HAVE_SYS_IPC_H 1
| #define HAVE_SYS_POLL_H 1
| #define HAVE_SYS_RESOURCE_H 1
| #define HAVE_SYS_SELECT_H 1
| #define HAVE_SYS_SEM_H 1
| #define HAVE_SYS_SOCKET_H 1
| #define HAVE_SYS_SHM_H 1
| #define HAVE_SYS_TIME_H 1
| #define HAVE_SYS_UN_H 1
| #define HAVE_TERMIOS_H 1
| #define HAVE_UTIME_H 1
| #define HAVE_WCHAR_H 1
| #define HAVE_WCTYPE_H 1
| #define HAVE_NETINET_IN_H 1
| #define HAVE_NETINET_TCP_H 1
| #define HAVE_STRINGIZE 1
| #define HAVE_FUNCNAME__FUNC 1
| #define HAVE_TZNAME 1
| #define HAVE_STRUCT_SOCKADDR_UN 1
| #define HAVE_UNIX_SOCKETS 1
| #define HAVE_STRUCT_SOCKADDR_STORAGE 1
| #define HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY 1
| #define HAVE_STRUCT_ADDRINFO 1
| #define HAVE_STRUCT_OPTION 1
| #define HAVE_INT_TIMEZONE
| #define ACCEPT_TYPE_RETURN int
| #define ACCEPT_TYPE_ARG1 int
| #define ACCEPT_TYPE_ARG2 struct sockaddr *
| #define ACCEPT_TYPE_ARG3 int
| /* end confdefs.h.  */
| /* Define cbrt to an innocuous variant, in case <limits.h> declares cbrt.
|    For example, HP-UX 11i <limits.h> declares gettimeofday.  */
| #define cbrt innocuous_cbrt
|
| /* System header to define __stub macros and hopefully few prototypes,
| which can conflict with char cbrt (); below.
| Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
| <limits.h> exists even on freestanding compilers.  */
|
| #ifdef __STDC__
| # include <limits.h>
| #else
| # include <assert.h>
| #endif
|
| #undef cbrt
|
| /* Override any gcc2 internal prototype to avoid an error.  */
| #ifdef __cplusplus
| extern "C"
| {
| #endif
| /* We use char because int might match the return type of a gcc2
|    builtin and then its argument prototype would still apply.  */
| char cbrt ();
| /* The GNU C library defines this for functions which it implements
| to always fail with ENOSYS.  Some functions are actually named
| something starting with __ and the normal name is an alias.  */
| #if defined (__stub_cbrt) || defined (__stub___cbrt)
| choke me
| #else
| char (*f) () = cbrt;
| #endif
| #ifdef __cplusplus
| }
| #endif
|
| int
| main ()
| {
| return f != cbrt;
|   ;
|   return 0;
| }
configure:13550: result: no





Re: [BUGS] 8.2bet2 failed build on Solaris 10 / x86-64 / SUN Studio

2006-11-03 Thread Andreas Lange




Alvaro Herrera wrote:
> Andreas Lange wrote:
>> configure:13462: checking for cbrt
>> configure:13519: /sw/sun-studio-11/SUNWspro/bin/cc -Xa -o conftest -fast
>> -fns=no -fsimple=1 -xtarget=opteron -xarch=amd64a conftest.c -lz
>> -lrt -lsocket  >&5
>> "conftest.c", line 104: warning: statement not reached
>> Undefined   first referenced
>>  symbol in file
>> cbrt    conftest.o
>> ld: fatal: Symbol referencing errors. No output written to conftest
>> configure:13525: $? = 1
>> configure: failed program was:
>
> Huh, long shot: maybe cbrt is a macro on that platform?
>
> Can you find where and how cbrt is declared and defined in your system
> headers?

I don't think that is the issue since 8.1.5 works with the same
env/configure arguments. I began to suspect that I was chasing the
symptoms and not the cause, making me diff the conftest from 8.1.5 and
8.2b2:

--- conftest.cbrt_8_1.c fre nov  3 16:14:40 2006
+++ conftest.cbrt_8_2.c fre nov  3 16:12:05 2006
@@ -2,20 +2,15 @@

 #define PACKAGE_NAME "PostgreSQL"
 #define PACKAGE_TARNAME "postgresql"
-#define PACKAGE_VERSION "8.1.5"
-#define PACKAGE_STRING "PostgreSQL 8.1.5"
+#define PACKAGE_VERSION "8.2beta2"
+#define PACKAGE_STRING "PostgreSQL 8.2beta2"
 #define PACKAGE_BUGREPORT "pgsql-bugs@postgresql.org"
-#define PG_VERSION "8.1.5"
+#define PG_VERSION "8.2beta2"
 #define DEF_PGPORT 5432
 #define DEF_PGPORT_STR "5432"
-#define PG_VERSION_STR "PostgreSQL 8.1.5 on i386-pc-solaris2.10,
compiled by /sw/sun-studio-11/SUNWspro/bin/cc -Xa"
+#define PG_VERSION_STR "PostgreSQL 8.2beta2 on i386-pc-solaris2.10,
compiled by /sw/sun-studio-11/SUNWspro/bin/cc -Xa"
 #define PG_KRB_SRVNAM "postgres"
-#define HAVE_LIBM 1
-#define HAVE_LIBDL 1
-#define HAVE_LIBNSL 1
-#define HAVE_LIBSOCKET 1
-#define HAVE_LIBGEN 1
-#define HAVE_LIBRESOLV 1
+#define PG_VERSION_NUM 80200
 #define HAVE_LIBZ 1
 #define HAVE_SPINLOCKS 1
 #define STDC_HEADERS 1
@@ -36,6 +31,7 @@
 #define HAVE_PWD_H 1
 #define HAVE_SYS_IPC_H 1
 #define HAVE_SYS_POLL_H 1
+#define HAVE_SYS_RESOURCE_H 1
 #define HAVE_SYS_SELECT_H 1
 #define HAVE_SYS_SEM_H 1
 #define HAVE_SYS_SOCKET_H 1
@@ -57,7 +53,6 @@
 #define HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY 1
 #define HAVE_STRUCT_ADDRINFO 1
 #define HAVE_STRUCT_OPTION 1
-#define HAVE_DECL_F_FULLFSYNC 0
 #define HAVE_INT_TIMEZONE
 #define ACCEPT_TYPE_RETURN int
 #define ACCEPT_TYPE_ARG1 int

Huh? No LIBM?

> cc conftest.cbrt_8_2.c
"conftest.cbrt_8_2.c", line 104: warning: statement not reached
Undefined   first referenced
 symbol in file
cbrt    conftest.cbrt_8_2.o
ld: fatal: Symbol referencing errors. No output written to a.out
> cc -lm conftest.cbrt_8_2.c
"conftest.cbrt_8_2.c", line 104: warning: statement not reached
>

So, it seems I need '-lm', but that is no longer tested in configure.


   //Andreas






Re: [BUGS] 8.2bet2 failed build on Solaris 10 / x86-64 / SUN Studio

2006-11-03 Thread Andreas Lange




Tom Lane wrote:
>> configure:13462: checking for cbrt
>> configure:13519: /sw/sun-studio-11/SUNWspro/bin/cc -Xa -o conftest -fast
>> -fns=no -fsimple=1 -xtarget=opteron -xarch=amd64a conftest.c -lz
>> -lrt -lsocket  >&5
>> "conftest.c", line 104: warning: statement not reached
>> Undefined   first referenced
>>  symbol in file
>> cbrt    conftest.o
>> ld: fatal: Symbol referencing errors. No output written to conftest
>
> Presumably the problem is that the cc call lacks "-lm".

Indeed. Just took me a bit longer to get that. :-)

> Checking back against 8.1, I see that 8.1's configure has
>
> 	AC_CHECK_LIB(m, main)
>
> where 8.2 tries to do
>
> 	AC_SEARCH_LIBS(pow, m)
>
> I suppose there is something funny about pow() on your platform
> causing that probe to fail.  What does config.log have at the
> "checking for library containing pow" step?

configure:5168: checking for library containing pow
configure:5198: /sw/sun-studio-11/SUNWspro/bin/cc -Xa -o conftest -fast
-fns=no -fsimple=1 -xtarget=opteron -xarch=amd64a conftest.c 
>&5
configure:5204: $? = 0
configure:5208: test -z 
             || test ! -s conftest.err
configure:5211: $? = 0
configure:5214: test -s conftest
configure:5217: $? = 0
configure:5287: result: none required


  regards,
 Andreas Lange




Re: [BUGS] 8.2bet2 failed build on Solaris 10 / x86-64 / SUN Studio

2006-11-14 Thread Andreas Lange
Zdenek Kotala wrote:

>
> The main problem is the -fast switch. It modifies the behavior of floating point
> operations (which is the reason it is not a good option for postgres), uses
> other floating point libraries, and some functions are inlined. That is the
> reason the pow test passed with the -fast switch but without the -lm switch.
>
> Detail description of -fast you can found on
> http://docs.sun.com/source/819-3688/cc_ops.app.html
> 

I noticed that the Sun FAQ has now changed from hinting that -fast might
be very beneficial to recommending staying away from it.

Using -fast is an old habit; I have been building with it for years. I've
seen that the testsuite breaks (in date/time) with only -fast, but it
seems the only option one has to disable to normalize floating point
enough is -fns. I hope passing the testsuite really means that fp math
behaves correctly. If I'm wrong about that, I'll have to change our
build routine.

Being lazy, it is a good bit easier to go with -fast and turn off the
problematic optimization with:
-fast -fns=no
than expanding the -fast macro and having to add all parameters:
-dalign -nofstore -fsimple=2 -fsingle -xalias_level=basic -native 
-xdepend -xlibmil -xlibmopt -xO5 -xregs=frameptr

I do understand the recommendation to avoid -fast; the tweaking is both
compiler version and hardware architecture dependent. Doing a make check
is always advisable.

   regards,
  Andreas

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


[BUGS] BUG #3280: wrong german-translation

2007-05-15 Thread Andreas Sakowski

The following bug has been logged online:

Bug reference:  3280
Logged by:  Andreas Sakowski
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2
Operating system:   debian-linux
Description:wrong german-translation
Details: 

If you truncate a table that is referenced by a foreign key constraint, you get an error
saying that this table cannot be truncated unless you use truncate ... cascade. If you
use truncate ... cascade, then the message 'Truncate-Vorgang löscht
ebenfalls Tabelle »...«' appears. "löscht" means "drops", and this is wrong. The
correct sentence would have to be 'Truncate-Vorgang leert ebenfalls Tabelle
»...«' ("leert" = "empties").

Andreas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[BUGS] BUG #3716: utf8 crash with replace

2007-11-02 Thread Andreas Pflug

The following bug has been logged online:

Bug reference:  3716
Logged by:  Andreas Pflug
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.4
Operating system:   Linux,Windows
Description:utf8 crash with replace
Details: 

If replace is used with UTF8 client_encoding, searching for a malformed
string, the backend will crash.

SELECT replace('Ärger', chr(195), 'Ae')
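
For comparison, the same substitution written with a complete character instead of
a single byte does not involve a malformed string at all:

SELECT replace('Ärger', 'Ä', 'Ae')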

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [BUGS] BUG #3716: utf8 crash with replace

2007-11-08 Thread Andreas Pflug
Tom Lane wrote:
> "Andreas Pflug" <[EMAIL PROTECTED]> writes:
>   
>> If replace is used with UTF8 client_encoding, searching for a malformed
>> string, the backend will crash.
>> 
>
>   
>> SELECT replace('Ärger', chr(195), 'Ae')
>> 
>
> I see no crash here.  Maybe it's been fixed since 8.2.4?  Please try
> 8.2.5.  If you still get a crash, could you send a stack trace?
>   
Ok, apparently fixed in 8.2.5.

Regards,
Andreas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[BUGS] [PATCH] Don't bail with legitimate -N/-B options

2008-02-16 Thread Andreas Kling

Greetings,

Starting PostgreSQL 8.3.0 with the default options used by Gentoo Linux 
(-N 40 -B 80) causes it to bail with an error message.


"the number of buffers (-B) must be at least twice the number of allowed 
connections (-N) and at least 16"


The problem is that NBuffers is actually "max autovacuum connections" + 
NBuffers.


My attached patch fixes this by adding "max autovacuum connections" * 2 
to NBuffers before the check.


Best regards,
Andreas Kling
ACG Nyström AB
Index: src/backend/postmaster/postmaster.c
===
--- src/backend/postmaster/postmaster.c	(revision 30129)
+++ src/backend/postmaster/postmaster.c	(working copy)
@@ -685,6 +685,9 @@ PostmasterMain(int argc, char *argv[])
 	/* And switch working directory into it */
 	ChangeToDataDir();
 
+	/* Add buffers to accomodate backends "reserved" for autovacuum */
+	NBuffers += autovacuum_max_workers * 2;
+
 	/*
 	 * Check for invalid combinations of GUC settings.
 	 */

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [BUGS] [PATCH] Don't bail with legitimate -N/-B options

2008-02-16 Thread Andreas Kling

Magnus Hagander wrote:
> Anybody know *why* Gentoo does such a thing? Having shared buffers at 
> the very lowest possible boundary just seems counterproductive.  Plus, 
> the normal way to set these things would be in postgresql.conf, why 
> override them on the commandline?
>
> It's not the first time I've seen people complain about this, it'd be 
> good to know why.

It's been brought up on the Gentoo bugzilla 
(http://bugs.gentoo.org/show_bug.cgi?id=206725), so hopefully something 
will come of that.

> Those are not comments on the actual patch, of course. For that one, 
> it looks to me like it's the wrong fix. I don't think we should be 
> adding to shared buffers like that - if somebody asked for a specific 
> value they should get that. But in that case the error message needs 
> to be changed, since it's misleading.

If we follow that logic, there shouldn't be an error message at all. ;-)

Cheers,
Andreas

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[BUGS] BUG #4252: SQL stops my graic cards

2008-06-18 Thread Andreas Andersson

The following bug has been logged online:

Bug reference:  4252
Logged by:  Andreas Andersson
Email address:  [EMAIL PROTECTED]
PostgreSQL version: SQL 2.8
Operating system:   Vista SP1
Description:SQL stops my graic cards
Details: 

Post this bug on pockertracker forum:
http://www.pokertracker.com/forums/viewtopic.php?f=29&t=7907



--
I have used PT2, SQL and Vista for a long time and everything has worked
fine with dual screens on a GeForce 8800.
I woke up this Sunday and one screen was black. The only thing that had happened
was that SP1 had been downloaded/installed. I spent 10 hours uninstalling,
repairing Windows, and finally formatting C:; then it all started to work again. I
installed PT2, SQL and SP1; success until I rebooted, and guess what, one
screen goes black again.

On Monday I left my computer at a shop; they changed the graphics card to an MSI
NX 8800 GTS, installed Vista with SP1 and told me it works perfectly with dual
screens. Cost me 500$.
This morning I picked up my computer and everything was running fine until
the first thing I did, which was installing PT2 and SQL; guess what, one screen
turns black after reboot.

Then I got it: I don't know how, but SQL 2.8 messes up my graphics card with Vista
SP1. I uninstalled SQL 2.8, rebooted, and both screens work
again!!
But I still want PT2 and SQL on my poker computer, so I tried to
install the latest SQL 3.3 and everything seemed to be good for a while, but then
one screen went black again and my computer freaks out.

This is so weird, and now I'm on format C: for the 5th time and installing
Vista again. 

But I will never install SQL again until I get a good answer here. I have
spent like 2-4 days on this problem now and am so sick of it. I'm sure that
Vista SP1 with SQL = problems with GeForce 8800 or MSI NX 8800 GTS.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] DELETE permission problems on table with long name after upgradeto 7.3.1

2003-01-23 Thread Andreas Lange
Hi!

Our admin upgraded PostgreSQL 7.2.3 to 7.3.1 a while ago (dump and 
restore).

Previously I had a table with a long name (>32 chars), which at creation 
was truncated to the proper length. The table was used from a perl script 
(using the full name) doing mostly DELETEs and INSERTs. The user 
accessing the table had been granted SELECT, UPDATE and INSERT.

After the upgrade the calls to the table no longer truncated the name 
(as expected), and thus didn't find the table. So I renamed the 
table to its full-length name, and now the scripts found the table 
again. But deletes were no longer allowed! Selects are ok, and so are 
inserts and updates, but delete gives "permission denied".

I've verified that the grants are the same as before. Altering the table 
name back to the previous short version gives the same result.
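
(The effective privileges can be listed from psql with, for example:

\z gradsite_dev_course_classification

which shows the access privileges recorded for the table.)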

Putting away the data in a temp table, dropping the problematic one and 
recreating it made the whole thing work again.

Below is the declaration used (before and after the upgrade):


Regards,
   Andreas



drop table gradsite_dev_course_classification;
create table gradsite_dev_course_classification (
   cid integer
 references gradsite_dev_courses (cid)
 on delete cascade,
   class integer
 references gradsite_dev_course_classes (class)
 on delete cascade,
   primary key (cid, class)
);

create index gradsite_dev_course_classification_cid on 
gradsite_course_classification (cid);

create index gradsite_dev_course_classification_class on 
gradsite_course_classification (class);

grant select,insert,update,delete on gradsite_dev_course_classification 
to gradsite;







---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[BUGS] problem with libpq7.3.3/encrypted

2003-07-24 Thread Andreas Pflug
I believe we have a problem with libpq 7.3.3 with encryption. I 
currently don't have a 7.3.3/ssl system running, but this has been 
reported by Christian Ruediger Bahls, who traced it, and Dave Page.

When running pgAdmin3 over an ssl-encrypted connection, it will hang, 
even for very small queries with a single result. As soon as 
ssl=false is set in postgresql.conf, or a 7.3.2 non-encrypting libpq is 
used, everything works fine. There's a complete gdb dump at

http://dmv.math.uni-rostock.de/downloads/gdb_trace.txt

The query string is truncated in that log, it should read like this:

SELECT rw.oid, rw.ev_class, rulename, relname, nspname, description, is_instead, 
ev_type, ev_action, ev_qual
  pg_get_ruledef(rw.oid) AS definition
 FROM pg_rewrite rw
 JOIN pg_class cl ON cl.oid=rw.ev_class
 JOIN pg_namespace nsp ON nsp.oid=cl.relnamespace
 LEFT OUTER JOIN pg_description des ON des.objoid=rw.oid
WHERE ev_class = 12345
ORDER BY rulename


Regards,
Andreas
Christian Ruediger Bahls wrote:
It could be because of the encryption layer
that postgresql uses; I think this is because I use the encrypted layer.
An incomplete stack trace I have from one
of these runs:
#0  0x407659c4 in read () from /lib/libc.so.6
#1  0x4066db50 in __JCR_LIST__ () from /lib/libpthread.so.0
#2  0x40668f53 in read () from /lib/libpthread.so.0
#3  0x400c31cc in BIO_new_socket () from /usr/lib/libcrypto.so.0.9.7
#4  0x000a in ?? ()
#5  0x08aec778 in ?? ()
#6  0x0005 in ?? ()
#7  0x400c31b6 in BIO_new_socket () from /usr/lib/libcrypto.so.0.9.7
#8  0x4013bc60 in KRB5_AUTHENT_it () from /usr/lib/libcrypto.so.0.9.7
#9  0x08ac3e48 in ?? ()

but still i guess i cant trust these traces
as i got stuff like that as well:
#0  0x40667a71 in __errno_location () from /lib/libpthread.so.0
#1  0x40032430 in ssl3_read () from /usr/lib/libssl.so.0.9.7
#2  0x4004c130 in ssl3_undef_enc_method () from /usr/lib/libssl.so.0.9.7
#3  0x in ?? ()
#4  0x08ad6b80 in ?? ()
#5  0xbfffee58 in ?? ()
#6  0x4004c130 in ssl3_undef_enc_method () from /usr/lib/libssl.so.0.9.7
#7  0x08b77e38 in ?? ()
#8  0xbfffee18 in ?? ()
#9  0x4003b7ea in SSL_read () from /usr/lib/libssl.so.0.9.7
#10 0x08ac0ed8 in ?? ()
#11 0x08b814f6 in ?? ()
#12 0x6942 in ?? ()
#13 0x4003b795 in SSL_read () from /usr/lib/libssl.so.0.9.7
#14 0x40153860 in pg_enc2name_tbl () from /usr/lib/libpq.so.3
#15 0x in ?? ()
#16 0x08ad6b80 in ?? ()
#17 0x40717438 in realloc () from /lib/libc.so.6
And Dave wrote:

Interestingly, I've just tried this on Linux, and when viewing
pg_description, the empty edit grid appears with 'Refreshing data,
please wait...' in the task bar and then hangs. 

The pg console shows:

LOG: SSL SYSCALL error: EOF detected
LOG: pq_recvbuf: Unexpected EOF on client connection
 

Regards,
Andreas




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [BUGS] problem with libpq/encrypted

2003-07-28 Thread Andreas Pflug
This problem is happening with libpq 7.4 too.

I believe we have a problem with libpq 7.3.3 with encryption. I 
currently don't have a 7.3.3/ssl system running, but this has been 
reported by Christian Ruediger Bahls, who traced it, and Dave Page.

When running pgAdmin3 over an ssl-encrypted connection, it will hang, 
even for very small queries with a single result. As soon as 
ssl=false is set in postgresql.conf, or a 7.3.2 non-encrypting libpq is 
used, everything works fine. There's a complete gdb dump at

http://dmv.math.uni-rostock.de/downloads/gdb_trace.txt

The query string is truncated in that log, it should read like this:

SELECT rw.oid, rw.ev_class, rulename, relname, nspname, description, 
is_instead, ev_type, ev_action, ev_qual
  pg_get_ruledef(rw.oid) AS definition
 FROM pg_rewrite rw
 JOIN pg_class cl ON cl.oid=rw.ev_class
 JOIN pg_namespace nsp ON nsp.oid=cl.relnamespace
 LEFT OUTER JOIN pg_description des ON des.objoid=rw.oid
WHERE ev_class = 12345
ORDER BY rulename



Regards,
Andreas
Christian Ruediger Bahls wrote:

It could be because of the encryption layer
that postgresql uses; I think this is because I use the encrypted layer.
An incomplete stack trace I have from one
of these runs:
#0  0x407659c4 in read () from /lib/libc.so.6
#1  0x4066db50 in __JCR_LIST__ () from /lib/libpthread.so.0
#2  0x40668f53 in read () from /lib/libpthread.so.0
#3  0x400c31cc in BIO_new_socket () from /usr/lib/libcrypto.so.0.9.7
#4  0x000a in ?? ()
#5  0x08aec778 in ?? ()
#6  0x0005 in ?? ()
#7  0x400c31b6 in BIO_new_socket () from /usr/lib/libcrypto.so.0.9.7
#8  0x4013bc60 in KRB5_AUTHENT_it () from /usr/lib/libcrypto.so.0.9.7
#9  0x08ac3e48 in ?? ()

but still i guess i cant trust these traces
as i got stuff like that as well:
#0  0x40667a71 in __errno_location () from /lib/libpthread.so.0
#1  0x40032430 in ssl3_read () from /usr/lib/libssl.so.0.9.7
#2  0x4004c130 in ssl3_undef_enc_method () from /usr/lib/libssl.so.0.9.7
#3  0x in ?? ()
#4  0x08ad6b80 in ?? ()
#5  0xbfffee58 in ?? ()
#6  0x4004c130 in ssl3_undef_enc_method () from /usr/lib/libssl.so.0.9.7
#7  0x08b77e38 in ?? ()
#8  0xbfffee18 in ?? ()
#9  0x4003b7ea in SSL_read () from /usr/lib/libssl.so.0.9.7
#10 0x08ac0ed8 in ?? ()
#11 0x08b814f6 in ?? ()
#12 0x6942 in ?? ()
#13 0x4003b795 in SSL_read () from /usr/lib/libssl.so.0.9.7
#14 0x40153860 in pg_enc2name_tbl () from /usr/lib/libpq.so.3
#15 0x in ?? ()
#16 0x08ad6b80 in ?? ()
#17 0x40717438 in realloc () from /lib/libc.so.6
And Dave wrote:

Interestingly, I've just tried this on Linux, and when viewing
pg_description, the empty edit grid appears with 'Refreshing data,
please wait...' in the task bar and then hangs.
The pg console shows:
LOG: SSL SYSCALL error: EOF detected
LOG: pq_recvbuf: Unexpected EOF on client connection
 

Regards,
Andreas




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if 
your
 joining column's datatypes do not match



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [BUGS] PG 7.3.1 with ssl on linux hangs (testcase available)

2003-08-01 Thread Andreas Pflug
Tom Lane wrote:
> Hans-Jürgen Hay <[EMAIL PROTECTED]> writes:
>> The hangs happen only when I connect via network using psql -h
>> localhost template1
>> And the problem is gone when ssl is switched off in postgresql.conf
>
> We have heard a couple of reports of problems with SSL connections,
> but so far I've not been able to reproduce any trouble here.  Details
> about your SSL installation might help.

Hi Tom,

good to hear that's not a singular problem with pgadmin3...
I currently don't have a system to reproduce this, because I didn't 
succeed making my 7.4 server accepting ssl-connections. I posted for 
installation help on pgsql-admin but didn't get a reaction so far. RTFM 
up and down I couldn't see what I configured wrong (--with-openssl, 
ssl=true). Can you give me a hint? If I can reproduce the problem, I 
might be able to trace down that ssl thing.

Regards,
Andreas
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [BUGS] PG 7.3.1 with ssl on linux hangs (testcase available)

2003-08-01 Thread Andreas Pflug
Tom Lane wrote:
> Andreas Pflug <[EMAIL PROTECTED]> writes:
>> I currently don't have a system to reproduce this, because I didn't 
>> succeed in making my 7.4 server accept ssl connections. I posted for 
>> installation help on pgsql-admin but didn't get a reaction so far. RTFM 
>> up and down I couldn't see what I configured wrong (--with-openssl, 
>> ssl=true). Can you give me a hint?
>
> No, I didn't see what you did wrong either.  Try looking in the
> postmaster log to see if any useful error messages appear.

I pushed client_min_messages and log_min_messages to debug5, and the 
only suspicious message at startup is

LOG:  could not load root cert file "/usr/data/pgsql-7.4/root.crt": No 
such file or directory
DETAIL:  Will not verify client certificates.

which shouldn't be a problem (pg_hba.conf is configured to trust).

Any connect attempt will log
DEBUG:  forked new backend, pid=1826 socket=8
DEBUG:  proc_exit(0)
DEBUG:  shmem_exit(0)
DEBUG:  exit(0)
DEBUG:  reaping dead processes
DEBUG:  child process (pid 1826) exited with exit code 0
with the client side message "server does not support SSL, but SSL was 
required". OpenSSL is 0.9.6g

Regards,
Andreas


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [BUGS] PG 7.3.1 with ssl on linux hangs (testcase available)

2003-08-01 Thread Andreas Pflug
Tom Lane wrote:
> Bizarre.  I looked through the recent sslmode patch, and while I found
> some things I didn't like, none of them explain this.  Could you perhaps
> get a trace of the server-to-client interaction?  Either strace'ing psql
> or watching the IP traffic with a packet sniffer should do --- but make
> sure you get the full contents of each packet sent and received.

Hi Tom,

the attached file is a tcpdump of a connect attempt.

Regards,
Andreas
00:14:21.128861 localhost.32821 > localhost.postgresql: F 2173273900:2173273900(0) ack 
2168553850 win 32767  (DF)
 4500 0034 0340 4000 4006 3982 7f00 0001
 7f00 0001 8035 1538 8189 872c 8141 817a
 8011 7fff 0dd5  0101 080a 0001 a841
 0001 a1c2
00:14:21.131604 localhost.postgresql > localhost.32821: F 1:1(0) ack 1 win 32767 
 (DF)
 4500 0034 0341 4000 4006 3981 7f00 0001
 7f00 0001 1538 8035 8141 817a 8189 872d
 8011 7fff 0755  0101 080a 0001 a841
 0001 a841
00:14:21.131635 localhost.32821 > localhost.postgresql: . ack 2 win 32767 
 (DF)
 4500 0034 0342 4000 4006 3980 7f00 0001
 7f00 0001 8035 1538 8189 872d 8141 817b
 8010 7fff 0755  0101 080a 0001 a841
 0001 a841
00:14:21.132540 localhost.32822 > localhost.postgresql: S 2177528657:2177528657(0) win 
32767  (DF)
 4500 003c 0343 4000 4006 3977 7f00 0001
 7f00 0001 8036 1538 81ca 7351  
 a002 7fff 5ce0  0204 400c 0402 080a
 0001 a841   0103 0300
00:14:21.132565 localhost.postgresql > localhost.32822: S 2176273229:2176273229(0) ack 
2177528658 win 32767  (DF)
 4500 003c 0344 4000 4006 3976 7f00 0001
 7f00 0001 1538 8036 81b7 4b4d 81ca 7352
 a012 7fff e787  0204 400c 0402 080a
 0001 a841 0001 a841 0103 0300
00:14:21.132582 localhost.32822 > localhost.postgresql: . ack 1 win 32767 
 (DF)
 4500 0034 0345 4000 4006 397d 7f00 0001
 7f00 0001 8036 1538 81ca 7352 81b7 4b4e
 8010 7fff 50a5  0101 080a 0001 a841
 0001 a841
00:14:21.132638 localhost.32822 > localhost.postgresql: P 1:9(8) ack 1 win 32767 
 (DF)
 4500 003c 0346 4000 4006 3974 7f00 0001
 7f00 0001 8036 1538 81ca 7352 81b7 4b4e
 8018 7fff 358c  0101 080a 0001 a841
 0001 a841  0008 04d2 162f
00:14:21.132647 localhost.postgresql > localhost.32822: . ack 9 win 32767 
 (DF)
 4500 0034 0347 4000 4006 397b 7f00 0001
 7f00 0001 1538 8036 81b7 4b4e 81ca 735a
 8010 7fff 509d  0101 080a 0001 a841
 0001 a841
00:14:21.132833 localhost.filenet-tms > localhost.filenet-tms: udp 24
 6000  0020 1140    
    0001    
    0001 8000 8000 0020 4aa6
 0200  1800    9b05 
    
00:14:21.134264 localhost.postgresql > localhost.32822: P 1:2(1) ack 9 win 32767 
 (DF)
 4500 0035 0348 4000 4006 3979 7f00 0001
 7f00 0001 1538 8036 81b7 4b4e 81ca 735a
 8018 7fff 0294  0101 080a 0001 a841
 0001 a841 4e
00:14:21.134284 localhost.32822 > localhost.postgresql: . ack 2 win 32767 
 (DF)
 4500 0034 0349 4000 4006 3979 7f00 0001
 7f00 0001 8036 1538 81ca 735a 81b7 4b4f
 8010 7fff 509c  0101 080a 0001 a841
 0001 a841

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [BUGS] PG 7.3.1 with ssl on linux hangs (testcase available)

2003-08-01 Thread Andreas Pflug
Hi Tom,

Reducing the traced conversation to its essence, it's just an ssl request 
to a non-ssl server :-(

localhost.32822 > localhost.postgresql:  0008 04d2 162f
 

localhost.postgresql > localhost.32822:  4e

And no hints from serverlog.
Are there some more debugging options that I could enable in the backend 
to be a bit more verbose about the communication process?

Regards,
Andreas


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [BUGS] PG 7.3.1 with ssl on linux hangs (testcase available)

2003-08-01 Thread Andreas Pflug
Tom Lane wrote:
> Hm.  The postmaster is sending back 'N' indicating that it does not want
> to do SSL.
> Are you sure you are connecting to an SSL-enabled postmaster?
>
> Also, is your connection by any chance IPV6?  It doesn't look like it
> from the tcpdump, but I'm not sure I know how to tell.

Ah, that's it! My code examination had just reached 
ProcessStartupPacket when your mail arrived.
The kernel is IPV6, and incoming IPV4 connections are handled as IPV6; I 
can tell this from pg_hba.conf experiments: my installation requires 
the :::192.168.0.0/24 entry.

The relevant bit of code in the postmaster is

#ifdef USE_SSL
   /* No SSL when disabled or on Unix sockets */
   if (!EnableSSL || port->laddr.addr.ss_family != AF_INET)
   SSLok = 'N';
   else
   SSLok = 'S'; /* Support for SSL */
#else
   SSLok = 'N'; /* No support for SSL */
#endif
According to the comment, it should be

if (!EnableSSL || port->laddr.addr.ss_family == AF_UNIX)

and after changing this it works! Quite hard to find; most users you'd 
ask "do you use IPV6?" would probably have answered "no". I didn't opt 
for IPV6 either, it's a stock SuSE 8.1.

I immediately checked the reported ssl hangs and found both cases 
working. Dave reported this failing on snake.pgadmin.org, which uses a 
different openssl (0.9.7a); I'll see if I can check that version (won't 
be before Monday, need some sleep now).

Regards,
Andreas
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[BUGS] ALTER SCHEMA problem

2003-08-14 Thread Andreas Hinz
If PostgreSQL failed to compile on your computer or you found a bug that
is likely to be specific to one platform then please fill out this form
and e-mail it to [EMAIL PROTECTED]

To report any other bug, fill out the form below and e-mail it to
[EMAIL PROTECTED]

If you not only found the problem but solved it and generated a patch
then e-mail it to [EMAIL PROTECTED] instead.  Please use the
command "diff -c" to generate the patch.

You may also enter a bug report at http://www.postgresql.org/ instead of
e-mail-ing this form.

============================================================
                POSTGRESQL BUG REPORT TEMPLATE
============================================================


Your name       : Andreas Hinz  
Your email address  : [EMAIL PROTECTED]


System Configuration
-
  Architecture (example: Intel Pentium) : Intel Pentium

  Operating System (example: Linux 2.0.26 ELF)  : Linux 2.4.21 ELF

  PostgreSQL version (example: PostgreSQL-7.3):   PostgreSQL-7.4beta1

  Compiler used (example:  gcc 2.95.2)  : gcc 3.2.3


Please enter a FULL description of your problem:


Hi,
I am not absolutely sure this is a bug, but consider this:

I am about to create a database with 5 schemas, each containing about 70
tables. Importing data via "psql  -f .

After the import I rename the schema "public" to e.g. "base1", create a 
new schema "public", import the next database, and so on.

Now the problem is that I use the datatype "serial", which creates the
column default "default nextval('public.abc_seq'::text)".

When renaming the schema from "public" to "base1" all indexes and
sequences are renamed correctly, but not the above "public." in the
default expression.


Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible: 
--

createdb test
psql test
CREATE TABLE ta1 (f1 serial, f2 integer);
ALTER SCHEMA public RENAME TO base1;
\d base1.*


If you know how this problem might be fixed, list the solution below:
-


Only by manual "ALTER TABLE ta1 ALTER f1 SET DEFAULT  etc.
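
For the test case above, and assuming the sequence keeps its default serial name
ta1_f1_seq, that amounts to something like:

ALTER TABLE base1.ta1 ALTER COLUMN f1 SET DEFAULT nextval('base1.ta1_f1_seq');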

But doing this for 5 schemas each having 70 tables is somewhat stupid.

Even a separate file with all the "ALTER" statements is no solution, as this is
an unfinished project with frequent changes to the tables and thus
possible changes in this file.


A possibility to select a default schema with e.g. "SET" on import would be
a really nice feature:

SET DEFAULT SCHEMA base1;

CREATE TABLE 

COPY FROM stdin 

etc.
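
Something quite close to this already exists in the form of the schema search path;
an import script could, for example, start with

SET search_path TO base1;

so that unqualified table names created or copied afterwards in that session end up
in schema "base1" instead of "public".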

-- 
Med venlig hilsen / Best regards / Mit freundlichen Grüssen

Andreas Hinz

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [BUGS] Database Grants Bug

2003-08-18 Thread Andreas Pflug
Marcus England wrote:
> On Mon, 2003-08-18 at 09:29, Joe Conway wrote:
>> Marcus England wrote:
>>> [...]
>>
>> Again, I don't know what your definition of "most, if not all other 
>> DBMS's" is, but a quick read through my MSSQL2000 manual indicates SQL 
>> Server is no different from Postgres in this regard. Same for Oracle 9i. 
>> I'd say that covers the majority of DBMS installations. I don't have a 
>> DB2 manual handy to check.
>
> I guess I meant the ability to grant permissions easily at the DB level.
> It's trivial in SQL Server via Enterprise Manager - no SQL needed. I
> assume DB2 and Oracle have similar facilities, not necessarily in SQL.
> Perhaps pgadmin has this ability?

AFAIR pgAdmin2 does have a grant utility for this. pgAdmin3 has this on 
the TODO for the next version.

Regards,
Andreas
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [BUGS] can't link the libpq.dll for bcc32.mak

2003-08-26 Thread Andreas Pflug
Ping-Hua Shao wrote:

> Dear:
> I try to compile the libpq library (in the 7.3.4 & 7.4 beta1 src folders)
> with bcc32.mak under bcc55 and bcb6, but have some problems when linking.
> The problems are that:
> 1. _pqGethostbyname
> 2. _pqStrerror
> cannot be referenced (unresolved symbols).


There are already patches for this on the way; please look at
pgsql-patches of the last days.

Regards,
Andreas


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [BUGS] session variable

2003-09-02 Thread Andreas Pflug
sad wrote:
>>> is it possible to define user variables in a session?
>>> If not, is it planned to be implemented?
>>> Or is it impossible in principle in PostgreSQL?
>>
>> psql has variables, but in general we don't support session variables.
>> You could create a temp table and put a value in there easily.
>
> that's ok, unless I need to access it in a trigger.

There's no problem using a temp table in a trigger (temp tables are local to the 
session); I do so myself. Still, session variables would be nice, making 
coding a little more comfortable.
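
A minimal sketch of the temp table approach (the table and key names are made up
for illustration):

CREATE TEMP TABLE session_vars (name text PRIMARY KEY, value text);
INSERT INTO session_vars VALUES ('order_source', 'import-batch');
-- and inside a trigger function:
-- SELECT value FROM session_vars WHERE name = 'order_source';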

> Where may I ask a programming and performance question?
> I think mailing such questions to "BUGS" is not a good idea.

Just use the pgsql-sql and pgsql-performance mailing lists.

Regards,
Andreas


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [BUGS] Cannot compile CVS current on Mac OS X 10.2.6

2003-09-02 Thread Andreas Pflug
Kenji Sugita wrote:

Attached are the compilation error messages from current CVS:

gcc -traditional-cpp -O2 -Wall -Wmissing-prototypes -Wmissing-declarations 
-I../../../src/include   -c -o pqcomm.o pqcomm.c
pqcomm.c: In function `StreamServerPort':
pqcomm.c:280: parse error before '<<' token
pqcomm.c:291: case label not within a switch statement
pqcomm.c:296: case label not within a switch statement
pqcomm.c:300: `default' label not within a switch statement
pqcomm.c:309: parse error before '>>' token
pqcomm.c:208: warning: unused variable `err'
pqcomm.c:209: warning: unused variable `maxconn'
pqcomm.c:210: warning: unused variable `one'
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
 

Looks like a cvs conflict because you changed something in pqcomm.c. 
Remove pqcomm* and cvs update again.

Regards,
Andreas
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [BUGS] session variable

2003-09-02 Thread Andreas Pflug
Bupp Phillips wrote:
> Could you possibly have some type of variable (preferably the transaction
> ID) that can identify an individual process?

There's pg_backend_pid() for 7.4, and backend_pid() as a contrib module for 
earlier releases.
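
For example:

SELECT pg_backend_pid();

returns the PID of the server process handling the current connection, which stays
the same for the whole session.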

Regards,
Andreas


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [BUGS] pgadmin 3 bug - crash on start, win95b

2003-09-05 Thread Andreas Pflug
Jim,

First of all, this is the wrong mailing list; use pgadmin-support for 
pgAdmin-related questions.

The binary win32 release of pgAdmin3 is unicode enabled, which requires 
Windows NT, 2000 or XP; 95 and 98 are too old.

Regards,
Andreas


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [BUGS] Postgresql 'eats' all mi data partition

2003-09-25 Thread Andreas Pflug
Javier Carlos wrote:


   POSTGRESQL BUG REPORT TEMPLATE

Your name   :   Javier Carlos Rivera
Your email address  :   fjcarlos ( at ) correo ( dot ) insp ( dot ) mx
System Configuration
--
 Architecture (example: Intel Pentium): Intel Pentium 4
 Operating System (example: Linux 2.0.26 ELF) 	  : Debian GNU/Linux 3.0 2.4.21

 RAM		  : 256 MB

 PostgreSQL version (example: PostgreSQL-6.3.2)  : PostgreSQL-7.3.4

 Compiler used (example:  gcc 2.7.2)		  : 2.95.4



Please enter a FULL description of your problem:
-
	On Thursday Bruce Momjian was in Mexico; I saw him and asked about 
this problem. He told me to write to this e-mail address.

	When I do a simple 'UPDATE', PostgreSQL 'eats' all the partition space 
of my data directory. For example: 

* My data directory is in /var
* BEFORE I do the UPDATEs I got this from df:
OPORTUNIDADES:~# df
Filesystem   1k-blocks  Used Available Use% Mounted on
/dev/hda2  2885812 14372   2724848   1% /
/dev/hda1 14421344   1195132  12493652   9% /var
/dev/hda3  7692908888560   6413568  13% /usr
/dev/hda6 12491804 22704  11834536   1% /javier
/dev/hda7  1494204 23936   1394364   2% /home
* Then I do the UPDATEs:
 The updates are of this type :
UPDATE tbl_personas SET "PIDFOLIO"=-2 WHERE "PIDFOLIO" IS NULL;
UPDATE tbl_personas SET "P5_1"=-2 WHERE "P5_1" IS NULL;
UPDATE tbl_personas SET "P4PAQ"=-2 WHERE "P4PAQ" IS NULL;
UPDATE tbl_personas SET "P5_4"=-2 WHERE "P5_4" IS NULL;
UPDATE tbl_personas SET "P5_5"=-2 WHERE "P5_5" IS NULL;
UPDATE tbl_personas SET "P36_4"=-2 WHERE "P36_4" IS NULL;
..
UPDATE table_name SET column_name = -2 WHERE column_name IS NULL;
..
  For this example, I ran 182 UPDATE queries. It took PostgreSQL a long 
time to do those updates (more than 30 minutes).
* After that I got this from df:
OPORTUNIDADES:~# df
Filesystem   1k-blocks  Used Available Use% Mounted on
/dev/hda2  2885812 14368   2724852   1% /
/dev/hda1 14421344   7422004   6266780  55% /var
/dev/hda3  7692908888536   6413592  13% /usr
/dev/hda6 12491804 22704  11834536   1% /javier
/dev/hda7  1494204 23960   1394340   2% /home

*** The /var partition grew from 9% to 55%, from 1GB to 5.9GB 

*** When I make a 'DROP DATABASE' to the database where that table belongs to, 
mi /var partition returns to its original size (in this example to 9%).

 If I continued making UPDATEs, the database would keep growing 
until it filled my whole partition (from 1GB up to 15GB!).

This is by design, you need to VACUUM regularly to keep your db healthy 
as pointed out in the documentation.
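
A minimal sketch of the routine maintenance meant here, using the table from the report; VACUUM FULL additionally hands the space back to the operating system but takes an exclusive lock:

VACUUM ANALYZE tbl_personas;  -- mark dead row versions reusable, refresh statistics
VACUUM FULL tbl_personas;     -- compact the table and return disk space to the OS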

Regards,
Andreas




Re: [BUGS] bug reporting

2003-10-03 Thread Andreas Pflug
Alexandr S wrote:

pgAdmin 3.1 doesn't work (for operations like inserting rows) with 
columns whose titles are in Russian. The same operations work fine 
through phpPgAdmin, and if the Russian column title is replaced with 
an English equivalent, everything works correctly. pgAdmin responds 
with the message: "2003-10-03 14:15:15 ERROR  : Column not found in 
pgSet: еще_колонка". Operating system: Windows XP, binary (zip) 
distribution, Russian language.

Hi Alexandr,

you're on the wrong list, please use pgadmin-support for pgAdmin related 
questions.

Which tool did you use to insert the data?

Regards,
Andreas




Re: [BUGS] [pgadmin-hackers] Degrade(character conversion problem) pga3?

2003-10-08 Thread Andreas Pflug
Hiroshi Saito wrote:

Hi Andreas.

I have a problem.
It started happening within this week.
See below.
http://cre-ent.skcapi.co.jp/~saito/pgadmin3/20030922_1.jpg
http://cre-ent.skcapi.co.jp/~saito/pgadmin3/20031007_1.jpg
I will keep examining it tonight, but please let me know if you know something.
Thank you.
 

Hi Hiroshi,

AFAICS this is a backend problem. I re-posted this on pgsql-bugs too, 
the attached log is your's, reduced to the relevant part. Please check 
the query in the Query Tool, try to modify the column name which seems 
to provoke this problem.
Maybe you found a bug in the EUC_JP <-> UTF-8 conversion.
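One way to narrow it down further might be to run the conversion in isolation, assuming the three-argument convert(string, src_encoding, dest_encoding) form available in that release:

SELECT convert('要素識別番', 'UNICODE', 'EUC_JP');  -- does the UTF-8 to EUC_JP conversion already fail here?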

Regards,
Andreas

2003-10-09 01:25:27 QUERY  : Set query (128.77.50.202:5432): SELECT null_frac AS "Null 
Fraction", avg_width AS "Average Width", n_distinct AS "Distinct Values", 
most_common_vals AS "Most Common Values", most_common_freqs AS "Most Common 
Frequencies", histogram_bounds AS "Histogram Bounds", correlation AS "Correlation"
  FROM pg_stats
 WHERE schemaname = 'public'
   AND tablename = 'stimsdb'
   AND attname = '要素識別番'
2003-10-09 01:25:27 QUERY  : WARNING:  UtfToLocal: could not convert UTF-8 (0xca94). 
Ignored
2003-10-09 01:25:27 QUERY  : WARNING:  UtfToLocal: could not convert UTF-8 (0xd427). 
Ignored
2003-10-09 01:25:27 QUERY  : WARNING:  LocalToUtf: could not convert (0x9776) EUC_JP 
to UTF-8. Ignored
2003-10-09 01:25:27 QUERY  : WARNING:  LocalToUtf: could not convert (0x9166) EUC_JP 
to UTF-8. Ignored
2003-10-09 01:25:27 QUERY  : WARNING:  LocalToUtf: could not convert (0x9522) EUC_JP 
to UTF-8. Ignored
2003-10-09 01:25:27 ERROR  : ERROR:  parser: unterminated quoted string at or near 
"'・ッ at character 351





Re: [BUGS] BUG #1082: Order by doesn't sort correctly.

2004-02-24 Thread Andreas Pflug
Tom Lane wrote:

Richard Neill <[EMAIL PROTECTED]> writes:
 

This ordering is perverse!
   

No kidding.

 

No matter what the priority is of the 
different characters, I cannot understand how the above can arise.
   

You are assuming that it's a byte-at-a-time process.  It's not.  I
believe the first pass considers only letters and digits.
You can easily prove to yourself that it's not just Postgres.  Here's
an example on my Linux laptop:
[EMAIL PROTECTED] tgl]$ cat zzz
Cymbal #1
Cymbal - 18 inch
Cymbal #2
[EMAIL PROTECTED] tgl]$ LC_ALL=C sort zzz
Cymbal #1
Cymbal #2
Cymbal - 18 inch
[EMAIL PROTECTED] tgl]$ LC_ALL=en_GB sort zzz
Cymbal #1
Cymbal - 18 inch
Cymbal #2
[EMAIL PROTECTED] tgl]$
 

I verified this, and it's not GB specific as one might suggest... Same 
with en_US, de_DE, fr_FR, af_ZA. Does this behaviour really make sense 
to anybody?
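
To see which collation a given cluster is actually sorting with, the read-only setting can be checked from SQL (a minimal sketch):

SHOW lc_collate;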

Regards,
Andreas




Re: [BUGS] [7.4.2] Still "variable not found in subplan target lists"

2004-04-16 Thread Andreas Pflug
Tom Lane wrote:

"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
 

Today after changing (with pgadmin III) the datatype of
qrydorders.docomment from varchar(255) to varchar(2500) i got the
error "variable not found in subplan target lists" [ from a view
dependent on the table ]
   

This is an unsupported operation.  You should perhaps complain to the
pgadmin guys that they are not correctly updating the system catalogs.
 

UPDATE pg_attribute
  SET atttypmod=2504
WHERE attrelid=25574::oid AND attnum=2;
This is what pgAdmin3 will generate to change a varchar to 2500 bytes. 
Please let me know what's wrong with that.

Regards,
Andreas


Re: [BUGS] [7.4.2] Still "variable not found in subplan target lists"

2004-04-18 Thread Andreas Pflug
Tom Lane wrote:

It doesn't fix views that contain references to the column.  The new
typmod would need to be propagated into the view's rule parsetree, and
perhaps to the type of the view's result column if the view directly
exposes the changed column (whereupon you need to recursively look at
the views that depend on this one, etc).
What you could probably do is find the referencing views via pg_depend.
For each one, try to do CREATE OR REPLACE VIEW using the view definition
string from pg_get_viewdef.  If it succeeds you're done (the variable
must not be propagated to any output column).  If it fails, adjust the
indicated output column's typmod.  Lather, rinse, repeat in case there
is more than one dependent output column.  Recurse once you've
successfully altered the view.
It'd probably also be a smart idea to error out if pg_depend shows any
dependencies on the column from objects that you don't know what to do
with (aren't views).
I recall there was some discussion of this stuff on pgsql-hackers the
last time it was proposed to support "ALTER COLUMN type".  We may have
thought of some additional considerations besides views.  I'd suggest
trawling the list archives to see...
 

There was discussion about altering types, mostly about changing between 
binary-incompatible types (e.g. int4 -> numeric), which requires adding/dropping 
columns and deep recreation of dependent objects.
There was a thread stating that a limited class of changes exists that 
can be done without deep impact, namely changing between binary-compatible 
types and extending the length. This is what pgAdmin3 does, 
but apparently this wasn't correct.
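
For reference, a rough sketch of the pg_depend lookup Tom describes, finding views whose rules reference the altered column (the table name is taken from the report; the column number 2 is just an assumption):

SELECT DISTINCT v.relname AS dependent_view
  FROM pg_depend d
  JOIN pg_rewrite r ON r.oid = d.objid
  JOIN pg_class  v ON v.oid = r.ev_class
 WHERE d.refobjid    = 'qrydorders'::regclass
   AND d.refobjsubid = 2;  -- attnum of the altered column (assumed)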

Regards,
Andreas


Re: [BUGS] UNIQUE not honoured for NULL

2004-07-12 Thread Andreas Pflug
Gaetano Mendola wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Bruno Wolff III wrote:
| On Mon, Jul 12, 2004 at 14:47:34 +0200,
|   Gaetano Mendola <[EMAIL PROTECTED]> wrote:
|
|>As someone says NULL = NULL is false
|
|
| As someone else pointed out, NULL = NULL is NULL which is not TRUE.
Yes, that it's better.
Still not precise.
NULL = NULL
is NULL (which is neither TRUE nor FALSE)
while
(NULL = NULL) IS NULL
 is TRUE...
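A quick way to see this for yourself:

SELECT NULL = NULL;            -- yields NULL, neither TRUE nor FALSE
SELECT (NULL = NULL) IS NULL;  -- yields TRUE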
Regards,
Andreas


Re: [BUGS] Bug in concat operator for Char? -- More Info

2004-07-21 Thread Andreas Pflug
Tom Lane wrote:
Food for thought: in 7.4,
regression=# select ('X   '::char) = ('X'::char);
 ?column?
--
 t
(1 row)
regression=# select ('Y   '::char) = ('Y'::char);
 ?column?
--
 t
(1 row)
regression=# select ('X   '::char || 'Y   '::char) = ('X'::char || 'Y'::char);
 ?column?
--
 t
(1 row)
If we change || as is proposed in this thread, then the last case would
yield 'false', because the first concatenation would yield 'X  Y  '
which is not equal to 'XY' no matter what you think about trailing
spaces.  I find it a bit disturbing that the concatenation of equal
values would yield unequal values.
Well this indicates that the first two examples are questionable. 'X   ' 
is quite-the-same as 'X', but not really-the-same.

CREATE OR REPLACE FUNCTION toms_name() RETURNS char(50)
as $BODY$
DECLARE fullname char(50);
DECLARE firstname char(50) := 'Tom';
DECLARE secondname char(50) := 'G';
DECLARE lastname char(50) := 'Lane';
BEGIN
   fullname := firstname;
   IF secondname != '' THEN
  IF fullname != '' THEN
 fullname := fullname || ' ';
  END IF;
  fullname := fullname || secondname;
   END IF;
   IF fullname != '' THEN
  fullname := fullname || ' ';
   END IF;
   fullname := fullname || lastname;
   RETURN fullname;
END;
$BODY$ LANGUAGE 'plpgsql';
I find the result of this function quite surprising, and certainly not 
what was intended (yes, this can be avoided, I know). The surprise gets 
even bigger if fullname is declared as text...


IMHO the bottom line here is that the SQL-spec behavior of type char(N)
is completely brain-dead.  
Just for COBOL's sake, I suppose.
Regards,
Andreas


Re: [BUGS] BUG #1199: pgAdmin problem

2004-07-21 Thread Andreas Pflug
PostgreSQL Bugs List wrote:
The following bug has been logged online:
Bug reference:  1199
Logged by:  Marko Zmak
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 7.5 Dev
Operating system:   Windows XP
Description:pgAdmin problem
This list is for pgsql, not for pgadmin. Please use 
[EMAIL PROTECTED]
I've installed PostgeSQL for Windows (using MSys and follwing your 
instructions), and there's a problem when I want to access it with pgAdmin 
III (version 1.0.2). I connect successfully as psql user but when I try to 
go to the Databases list I get the following error message: 

column "datpath" does not exist
pgAdmin III V1.0.2 is for PostgreSQL 7.3 and 7.4 only, use a V1.1 snapshot.
Regards,
Andreas


Re: [BUGS] broken contrib/dbsize in cvs head?

2004-08-11 Thread Andreas Pflug
Fabien COELHO wrote:
contrib/dbsize> make
gcc -O2 -fno-strict-aliasing -Wall -Wmissing-prototypes
-Wmissing-declarations -fpic -I. -I../../src/include -D_GNU_SOURCE   -c -o
dbsize.o dbsize.c
dbsize.c: In function `database_size':
dbsize.c:58: error: too few arguments to function `GetDatabasePath'
make: *** [dbsize.o] Error 1
It seems that GetDatabasePath was updated, but not all its callers were
fixed.
This is a known issue; there's a patch in pgsql-patches fixing this very 
issue while still not tablespace-aware for tables, and another patch 
proposing new functions which offer tablespace size as well.

Regards,
Andreas


Re: [BUGS] pgAdmin III edit grid on empty Table

2004-08-13 Thread Andreas Pflug
Nathan Daniels wrote:
I am running the Win32 build installed of the msi installer on a Win2K box.
After installing the database, I created a new database named 'FPMain' and a
new table named 'Trusted' with one column also named 'Trusted' of
varchar[100] NOT NULL.  When I went to edit the table, the column showed up,
but there was no empty row displayed for me to begin entering data.  I
executed an arbitrary SQL statement to add one row of data to it.  The row
correctly showed up in the editor, but there was still no additional empty
row to begin entering new data.
Nathan,
please post pgadmin related stuff at pgadmin-support.
You probably didn't assign a primary key to that table, pgadmin3 will 
refuse to edit tables on which it can't identify rows uniquely.
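A sketch of the missing piece, assuming the single column is meant to identify rows (the constraint name is made up):

ALTER TABLE "Trusted" ADD CONSTRAINT trusted_pkey PRIMARY KEY ("Trusted");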

Regards,
Andreas


Re: [BUGS] PostgreSQL 8.0

2004-08-13 Thread Andreas Pflug
Andras Kutrovics wrote:
LOG:  select() failed in statistics collector: An operation was 
attempted on something that is not a socket.
What's your operating system platform?
Regards,
Andreas


Re: [BUGS] postgresql 8 (pgAdminIII) crashes when login form X is

2004-08-15 Thread Andreas Pflug
Peter Eisentraut wrote:
10006226 wrote:
I installed the version 8 software using the msi package and created the postgres
user and one other.
On opening the pgAdmin login form I couldn't overtype the user name postgres,
so I clicked the form's close icon (red X).
A message box said "An error has occurred. Couldn't create a connection
dialogue!"

Known and already fixed.
Regards,
Andreas


Re: [BUGS] 8.0.0 beta 1, contrib/dbsize, GetDatabasePath wrong

2004-08-17 Thread Andreas Pflug
Bruce Momjian wrote:
This has been corrected in current  CVS.
But it still fails for tables in tablespaces. That's why I posted all 
new functions a while ago.

Regards,
Andreas


Re: [BUGS] 8.0.0 beta 1, contrib/dbsize, GetDatabasePath wrong

2004-08-17 Thread Andreas Pflug
Bruce Momjian wrote:
Andreas Pflug wrote:
Bruce Momjian wrote:
This has been corrected in current  CVS.
But it still fails for tables in tablespaces. That's why I posted all 
new functions a while ago.

Yes, you posted the functions, but I don't understand how to integrate
that into dbsize.
What's the problem? The usage of oids instead of names? The current 
dbsize functions are not easy to integrate into queries as executed from 
admin tools, such as
SELECT *, pg_relation_size(oid) FROM pg_class

Regards,
Andreas




Re: [BUGS] 8.0.0 beta 1, contrib/dbsize, GetDatabasePath wrong

2004-08-17 Thread Andreas Pflug
Bruce Momjian wrote:
Andreas Pflug wrote:
Bruce Momjian wrote:
Andreas Pflug wrote:

Bruce Momjian wrote:

This has been corrected in current  CVS.
But it still fails for tables in tablespaces. That's why I posted all 
new functions a while ago.

Yes, you posted the functions, but I don't understand how to integrate
that into dbsize.
What's the problem? The usage of oids instead of name? The current 
dbsize functions are not easy to integrate in queries as executed from 
admin tools, as
SELECT *, pg_relation_size(oid) FROM pg_class

I basically didn't understand how it fit in with dbsize, and being in
beta, I didn't have time to fiddle with it.
OK, what can we do?
I could create something that includes my functions, and additionally the 
old dbsize function API reusing that code, if it's still needed.

Regards,
Andreas


Re: [BUGS] BUG #1229: memory leak in backend with COPY and FOREIGN KEY

2004-08-25 Thread Andreas Heiduk
Hello!

After splitting the job into smaller pieces (e.g. 18x 1M rows) the backend process 
now seems to release the memory after each subjob. So the trigger queue seems 
to be a good candidate. Until now this queue was unknown to me. 

Perhaps a note in the documentation of COPY FROM and in the section "13.4.2 Use COPY FROM" 
within "Performance Tips" would prevent other people like me from doing such bad things.

Many thanks for the fast help.

Andreas Heiduk


Stephan Szabo <[EMAIL PROTECTED]> schrieb am 24.08.04 19:25:56:
> 
> 
> On Tue, 24 Aug 2004, PostgreSQL Bugs List wrote:
> 
> > I'm trying to COPY ~18Mrows into a table which has a foreign key to another
> > table. Memory and swap are exhausted and finaly the postgres.log says:
> 
> This is very possibly the space taken up by the trigger queue (which
> cannot currently spill out to disk). If you load a smaller number of rows
> does the space go up and then down after the copy ends?






Re: [BUGS] Sugestion

2004-09-02 Thread Andreas Pflug
Julinho wrote:
I would like to suggest a control panel or icon-tray interface for postgres, 
like the service manager in MSSQL (icon tray) or the Firebird server manager 
(control panel).
It would be very nice to control the service!
I never understood what that tray icon should be good for (beyond 
filling up the tray with colourful useless stuff). Usually a server 
process is started when the machine starts, and stopped when switched off.
To control the pgsql service, you can do this in pgAdmin III, in 
addition to the usual services applet.

Regards,
Andreas


Re: [BUGS] Money type not gone?

2004-09-20 Thread Andreas Pflug
Karel Zak wrote:
On Sun, 2004-09-19 at 15:29 -0400, Tom Lane wrote:

 I want to rewrite it for 8.1 as numeric based datetype with some
formatting extension probably with some internal stuff from to_char()
familly.
How about a type that is also able to hold an ISO currency identifier?
After a long period of work on multi currency applications, I found out 
that adding 1 USD and 1 EUR won't give a good result...
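
A minimal sketch of what such a type could look like (just an illustration, not an existing pgsql type):

CREATE TYPE currency_amount AS (
  amount   numeric(18,4),
  currency char(3)   -- ISO 4217 code such as 'USD' or 'EUR'
);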

Regards,
Andreas


Re: [BUGS] Money type not gone?

2004-09-20 Thread Andreas Pflug
Karel Zak wrote:
On Mon, 2004-09-20 at 08:36 +, Andreas Pflug wrote:
Karel Zak wrote:
On Sun, 2004-09-19 at 15:29 -0400, Tom Lane wrote:

I want to rewrite it for 8.1 as numeric based datetype with some
formatting extension probably with some internal stuff from to_char()
familly.
How about a type that is also able to hold an ISO currency identifier?

My idea is special internal API that will usable for new datetypes
programming if type is defined as "numeric + symbol", for example things
like speed, weight, currency.. etc.
A type consisting of value and unit would probably make pgsql even more of a first 
choice for technical applications. Please note that %, k, M and so on 
are scales, not units, and thus don't belong in that type.

Regards,
Andreas


Re: [BUGS] psql can not connect to the server on Win2000 NT

2004-09-20 Thread Andreas Pflug
Kouber Saparev wrote:
This sounds a lot like FAQ item 3.2 on
http://pginstaller.projects.postgresql.org/FAQ_windows.html. Check those
instructions as a first step.

Maybe you're right that there's something installed on my computer that's
bugging the postmaster. The only networking tool that I have installed is
NetLimiter
This is a good example of a misleading error message, probably resulting 
from examining errno. Using GetLastError() on those socket calls might 
help reduce support requests.

Regards,
Andreas


Re: [BUGS] Money type not gone?

2004-09-21 Thread Andreas Pflug
Josh Berkus wrote:
The difference with currency would be the lack of a fixed conversion for 
different units. For example, you can:
You're mixing up quite a lot of stuff here.
10m == 1000cm
This is just what I considered forbidden to be included in that "unit-ed 
type" in my previous mail. c = centi is a scale, which is up to a view 
conversion, and should not be stored. It would be a pain to calculate 
with it.


7l == 0.07m^3
l is not an SI unit; m^3 is. See below for further handling.
But you can't reasonably:
10USD == 6.25UKL
Yes, right. USD and UKL are different units, and units are generally not 
convertible. If you want one from the other, you always have to multiply 
by something that adapts the units. 1kg = .001m^3 is never true by itself; it 
needs a factor of 1000kg/m^3, the density of water. Unfortunately, for currencies this 
factor isn't fixed (it isn't fixed for water either, strictly speaking).

l is an abbreviation for .001m^3. If you really decided to handle it as a 
unit in the "unit-ed type", it wouldn't be convertible either. Since in 
real life l and m^3 are rarely used interchangeably for a specific item, 
this seems acceptable.


... because that would require a query to money.yahoo.com to establish.  
It's even more complicated. In practice, real-time exchange rates 
are used much less frequently than point-in-time rates.

Regards,
Andreas


Re: [BUGS] [pgadmin-support] Bug: PGAdmin + plpythonu + windows

2004-10-20 Thread Andreas Pflug
Sim Zacks wrote:
PGAdmin 1.2.0 Beta 2
Multi-line functions in plpythonu do not work when created using
pgadmin. The function is created but when it is run for the first time
an error of "cannot compile. syntax error on line 2" is given. The
functions have been tested and work when created with psql.
The problem appears to be due to the Windows CRLF line ending as opposed to the
*NIX LF.
Which PostgreSQL version and platform (both client and server)?
From your description, I'd call this a plpythonu bug; I don't have it 
running to verify this, so I cross-posted to pgsql-bugs.

IMHO any pgsql language should handle CR, CRLF and LF equally as line 
endings, to avoid client/server EOL interpretation discrepancies.

Regards,
Andreas


Re: [BUGS] 22021: invalid byte sequence for encoding \"UNICODE\":

2004-10-29 Thread Andreas Pflug
Lucas Sultanum wrote:
I am not sure if this is a bug or I am doing something wrong. When I 
execute the following command (*insert into a_cadclias values 
('6542','65465','amaro','ámaro')*)  on pgAdmin III Query it works pretty 
well, but when I try to do the same through a C# App connecting to the 
database through an ODBC driver I get the following error:

*"ERROR: 22021: invalid byte sequence for encoding \"UNICODE\": 0xe16d61"*
Sounds as if you're not using the correct client encoding; your app is 
probably non-unicode, but you're using client encoding Unicode.
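
A minimal check/workaround, assuming the application actually sends Latin-1 data:

SHOW client_encoding;             -- what the driver negotiated
SET client_encoding TO 'LATIN1';  -- match it to what the app really sends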

Regards,
Andreas


Re: [BUGS] Template1 is locked when pgAdminIII is open

2005-01-31 Thread Andreas Pflug
Tom Lane wrote:
"Dave Page"  writes:
pgAdmin uses template1 by default as it is the only accessible database
that we can be sure will exist. There are 2 solutions to your problem:

1) Change your pgAdmin connection to use a different default database.

2) Select a different database to use as the template when creating a
new one.

He shouldn't really be getting that error though, as CREATE DATABASE
only complains if there is another connection *besides your own* to
the template database.
I think the real answer is something more along the lines of "don't run
two copies of pgAdmin at once".
Which is still not the whole truth.
pgAdmin may open more connections to the "initial database", e.g. to 
show the server status.

So the advice is "don't use the template1 database for pgadmin 
connections if you're creating databases frequently"
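
A sketch of that advice, with a made-up maintenance database name:

CREATE DATABASE pgadmin_maint;
-- then point pgAdmin's initial connection at pgadmin_maint instead of template1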

Regards,
Andreas


Re: [BUGS] Template1 is locked when pgAdminIII is open

2005-01-31 Thread Andreas Pflug
Tom Lane wrote:
"Dave Page"  writes:
Tom Lane wrote:
I think the real answer is something more along the lines of 
"don't run two copies of pgAdmin at once".

He might not be. pgAdmin uses a master connection (normally to
template1) and one connection to each database browsed (minus the master
connection which is reused). However, each SQL Query window opened will
also open a new connection to the currently selected database, which is
what might have happened in this case. Of course, the easy answer is to
close any SQL windows in template1 as well...

Hmm.  Would it be possible to teach pgAdmin to close extra connections
to template1 whenever it's doing CREATE DATABASE?
If those connections were guaranteed to be idle, we wouldn't need them. 
They are independent windows, so we can't kill the connection.

Regards,
Andreas


[BUGS] pg_dump table ordering bug [8.0.1]

2005-02-07 Thread Andreas Lange
Hi!
Our upgrade from 7.4.6 to 8.0.1 only had one small glitch. Two tables 
got dumped in the wrong order (before their dependencies) and had to get 
their contents added manually after the restore. I've at least isolated 
the part where things go wrong.

Two files are attached, related as follows (all binaries from 8.0.1):
> psql -f db-9-spec.sql
> pg_dump -n debug database > db-9-dump.sql
[drop schema debug cascade]
> psql -f db-9-dump.sql
psql:db-9-dump.sql:302: ERROR:  new row for relation "form_a_int" 
violates check constraint "form_a_int_qid_check"
CONTEXT:  COPY form_a_int, line 1: "1211095"
psql:db-9-dump.sql:311: ERROR:  new row for relation "form_a_text" 
violates check constraint "form_a_text_qid_check"
CONTEXT:  COPY form_a_text, line 1: "111109foo"

The tables have both check and reference constraints. The errors are 
from check constraints but the reference constraints would have kicked 
in next as the referenced data is below this table in the dump file...

ida=# \d debug.form_a_int
   Table "debug.form_a_int"
 Column   |  Type   | Modifiers
---+-+---
finstance | integer | not null
qid   | integer | not null
uid   | integer | not null
a_int | integer |
Indexes:
   "form_a_int_pkey" primary key, btree (finstance, qid, uid)
Check constraints:
   "form_a_int_check" CHECK (debug.match_q_instance(finstance, qid))
   "form_a_int_qid_check" CHECK (debug.get_atype(qid) = 'INT'::text)
Foreign-key constraints:
   "form_a_int_qid_fkey" FOREIGN KEY (qid) REFERENCES debug.form_q(qid) 
ON UPDATE CASCADE ON DELETE CASCADE
   "form_a_int_finstance_fkey" FOREIGN KEY (finstance) REFERENCES 
debug.form_instance(finstance) ON UPDATE CASCADE ON DELETE CASCADE

And the dump data order is:
-- Data for Name: form_a_int;
-- Data for Name: form_instance;
-- Data for Name: form_q;
 Regards,
Andreas (Not a subscriber of this list)

begin;
--drop schema debug cascade;
create schema debug;


create sequence debug.form_id;

create sequence debug.form_instance_id;

create sequence debug.form_q_id;



create table debug.form_template (
   fid integer primary key default nextval('debug.form_id'),
   name text
);


create table debug.form_instance (
   finstance integer 
  primary key 
  default nextval('debug.form_instance_id'),
   fid integer not null
 references debug.form_template (fid)
 on delete cascade
 on update cascade,
   type text check (type = 'COURSE')
);

create table debug.course_form (
   finstance integer not null
 primary key
 references debug.form_instance (finstance)
 on delete cascade
 on update cascade,
   cid integer not null
-- references debug.courses (cid)
-- on delete cascade
-- on update cascade
);



create table debug.form_qtypes (
  qtype integer primary key,
  atype text not null
 check (atype = 'INT' or atype = 'TEXT')
);


create table debug.form_q (
  qid integer not null primary key 
  default nextval('debug.form_q_id'),
  fid integer not null
  references debug.form_template (fid)
  on delete cascade
  on update cascade,
  qno integer not null
  check (qno >= 0),
  qtype integer not null
  references debug.form_qtypes (qtype)
  on update cascade,
  qtext text
);


create table debug.form_q_alt (
  qid integer not null
  references debug.form_q (qid)
  on delete cascade
  on update cascade,
  altno integer not null
  check (altno >= 0),
  altvalue text not null,
  alttext text,
  primary key (qid, altno)
);


create or replace
   function debug.add_course_form(integer, integer) 
   returns integer as '
DECLARE
   my_fid ALIAS FOR $1;
   my_cid ALIAS FOR $2;
   finst integer;   
BEGIN
   select nextval(''debug.form_instance_id'') into finst;
   insert into debug.form_instance (finstance, fid, type)
values(finst, my_fid, ''COURSE'');
   insert into debug.course_form (finstance, cid)
values (finst, my_cid);
   return finst;
END
' 
language 'plpgsql';



create or replace
   function debug.get_atype(integer) 
   returns text stable as '
DECLARE
   my_qid ALIAS FOR $1;
   my_qtype integer;
BEGIN
   select qtype 
   from debug.form_q 
   where qid=my_qid
   into my_qtype;  
   if my_qtype is null then
  return '''';
   end if;
   return 
  (select atype
   from debug.form_qtypes
   where qtype=my_qtype);
END
' 
language 'plpgsql';


create or replace
   function debug.match_q_instance(integer, integer) 
   returns boolean stable as '
DECLARE
   my_finstance ALIAS FOR $1;
   my_qid ALIAS FOR $2;
   q_fid integer;
   finst_fid integer;
BEGIN
   select fid
   from debug.form_inst

Re: [BUGS] Minor bug in pgAdmin III

2005-02-09 Thread Andreas Pflug
Lucas Galfaso wrote:
Create a trigger using capitalized letters. Whenever you select the trigger, 
the name of the trigger appears without quotes.
Fixed for 1.2.1, thanks for reporting.
Regards,
Andreas


Re: [BUGS] PGPASSWORD

2005-02-27 Thread Andreas Pflug
Oliver Jowett wrote:
postgresbugs wrote:
The functionality provided by PGPASSWORD should not be removed unless 
there is an alternative to .pgpass (which is fine for some 
uses and not for others) that provides similar functionality. That 
could be psql, pg_dump and the like accepting a password on the 
command line, as I stated earlier. 

Putting the password on the command line would be even more of a 
security problem than PGPASSWORD is now. I agree that an alternative to 
,pgpass would be useful, but it needs to be a *secure* alternative.
The command line could take a file handle of an inherited pipe.
Regards,
Andreas


Re: [BUGS] request

2005-05-07 Thread Andreas Pflug
John R Pierce wrote:
I would really like postgres to model the ALTER syntax of mysql, 
which makes it really easy to modify a table schema, especially the 
AFTER col_name part.

A) why does the order of the columns in a particular table matter at all?
When coding without caring about portability or robustness, programmers 
are unfortunately seduced by MySQL into working like that...

B) is this any sort of SQL standard?
Definitely no.
Do not use SELECT * if you need a specific column ordering, *that* is 
SQL standard.
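
A small illustration with hypothetical names:

-- fragile: result column order depends on the table's physical layout
SELECT * FROM orders;
-- robust and SQL standard: name the columns in the order the application expects
SELECT order_id, customer_id, created_at FROM orders;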

Regards,
Andreas



Re: [BUGS] BUG #1707: statistics collector starts with stats_start_collector

2005-06-14 Thread Andreas Pflug

Bruce Momjian wrote:



If we don't have a way to check this, we'll regret it soon enough...
now maybe a GUC setting isn't the optimal way, but I think we need
*some* way besides ps.  ps doesn't work remotely and I think there's
no simple equivalent under Windows either.



Sure, but the GUC only reports that it thinks the stats collector
started, not whether it is running or not.  I think 'ps' is a fine way
to tell.


Um, no such beast under win32. You can only see some postgres processes, 
but can't distinguish them. We'd need some functions that examine the 
pids known to the postmaster (pg_get_collector_pid, pg_get_logger_pid, ...)


Regards,
Andreas



Re: [BUGS] BUG #1735: row number -1 is out of range 0..-1 error

2005-06-28 Thread Andreas Pflug

"3";"12";"0";"7 days

is the result I'm getting in pgadmin.

Regards,
Andreas


