[BUGS] When dealing with 'bigint' or 'int8', the command 'copy' doesn't work

2001-08-01 Thread Dong, Meng
Hi guys,

I met a problem when trying to import data from a tab-delimited text
file into a table with the command 'copy from'. A sample procedure is
explained as follows.

First, I generate a file called /home/temp/test.txt; let's suppose it
has only two lines (the four figures are delimited by tabs):

123	456
789	999

Second, I type the following command under psql:

create table ttt (c1 bigint, c2 bigint);

Finally, I try to import the data from test.txt into table ttt with:

copy ttt from '/home/temp/test.txt';

But I was told:

ERROR:  copy: line 1, Bad int8 external representation "'123'"

(My PostgreSQL is v7.1.)
I would appreciate any help or advice.

    thanks in advance,

    Dong, Meng
    [EMAIL PROTECTED]


Re: [BUGS] When dealing with 'bigint' or 'int8', the command 'copy' doesn't work

2001-08-01 Thread Tom Lane

"Dong, Meng" <[EMAIL PROTECTED]> writes:
> SGkgZ3V5cywNCg0KSSBtZWV0IGEgcHJvYmxlbSB3aGVuIHRyeSB0byBpbXBv
> cnQgZGF0YSBmcm9tIGEgVGFiLWRlbGltaXRlZC10ZXh0IGZpbGUgdG8gYSB0
> YWJsZSBieSBjb21tYW5kICdjb3B5IGZyb20nLiBBIHNhbXBsZSBwcm9jZWR1
> cmUgaXMgZXhwbGFpbmVkIGFzIGZvbGxvdywNCg0KDQpmaXJzdCwgSSBnZW5l
> cmF0ZSBhIGZpbGUgY2FsbGVkICAvaG9tZS90ZW1wL3Rlc3QudHh0LCBsZXQn
> [ etc ]

Please don't send HTML mail to the lists.

Although I'm not having any luck quoting your mail, I can tell
you that copying int8 values works fine for me.  If the error
message is spelled exactly as you show it, it suggests that what's
really in your data file is not

123 456

but

'123'   '456'
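
A minimal sketch of the difference, with hypothetical /tmp file paths
(<TAB> stands for a literal tab character):

-- works: the file holds bare tab-delimited digits, e.g.
--   123<TAB>456
--   789<TAB>999
create table ttt (c1 bigint, c2 bigint);
copy ttt from '/tmp/plain.txt';

-- fails with the "Bad int8 external representation" error: the file
-- holds quoted values such as '123'<TAB>'456', and the int8 input
-- routine rejects the quote characters
copy ttt from '/tmp/quoted.txt';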

regards, tom lane




[BUGS] Automatic reply

2001-08-01 Thread rychu


Hello!

I have left for vacation - I will reply to your message after I
return.

In matters concerning the SKY-NET company, please write directly to
[EMAIL PROTECTED]




Re: [BUGS] query first time works second time fails for one connection

2001-08-01 Thread Tom Lane

Temp tables don't interact well with plpgsql's attempts to cache query
plans for the queries in its functions.  This is a bug, but a solution
is not close at hand.

In 7.1, you can work around this by using EXECUTE to execute the queries
on the temp table, thus forcing a re-plan on every execution.  Ugly but
serviceable.

Another possibility is to create the temp table only once per session,
outside the function itself.  The function can just DELETE FROM the
table to clear it for re-use on each call.
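
A minimal sketch of the 7.1 EXECUTE workaround, reusing the table and
column names from the bug report quoted later in this digest (an
untested illustration, not a drop-in fix):

create function funNotificationLimit() returns opaque as '
begin
    create temp table tr as
        select rptreportid from reporttable
        where srvserverid = NEW.srvserverid
        order by rpttimestamp desc limit 20;
    -- only the statements that touch the temp table go through
    -- EXECUTE: the query string is planned afresh on every call,
    -- so a stale table OID cached from a previous call is never used
    execute ''delete from reporttable where srvserverid = ''
        || NEW.srvserverid
        || '' and rptreportid < (select min(rptreportid) from tr)'';
    execute ''drop table tr'';
    return NEW;
end;
' language 'plpgsql';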

regards, tom lane




[BUGS] Too many open files

2001-08-01 Thread Darin Fisher

I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.

Under a pretty heavy load:
1000 Transactions per second
32 Open connections

Everything restarts because of too many open files.
I have increased my max number of open files to 16384, but this
just delays the inevitable.

I have tested the same scenario under Solaris 8 and it works
fine.

Is there anything I can do about this?

Darin




Re: [BUGS] Too many open files

2001-08-01 Thread Tom Lane

Darin Fisher <[EMAIL PROTECTED]> writes:
> I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.
> Under a pretty heavy load:
>   1000 Transactions per second
>   32 Open connections

> Everything restarts because of too many open files.
> I have increased my max number of open files to 16384, but this
> just delays the inevitable.

> I have tested the same scenario under Solaris 8 and it works
> fine.

Linux (and BSD) have a tendency to promise more than they can deliver
about how many files an individual process can open.  Look at
pg_nofile() in src/backend/storage/file/fd.c --- it believes whatever
sysconf(_SC_OPEN_MAX) tells it, and on these OSes the answer is likely
to be several thousand.  Which the OS can indeed support when *one*
backend does it, but not when dozens of 'em do it.

I have previously suggested that we should have a configurable upper
limit for the number-of-openable-files that we will believe --- probably
a GUC variable with a default value of, say, a couple hundred.  No one's
gotten around to doing it, but if you'd care to submit a patch...

As a quick hack, you could just insert a hardcoded limit in
pg_nofile().

regards, tom lane




Re: [BUGS] Too many open files

2001-08-01 Thread Darin Fisher

Thanks, so far that looks like it is helping.
Only time will tell :)
I take it that pg_nofile is the maximum number of files to open per
Postgres session?

Darin

Tom Lane wrote:

> As a quick hack, you could just insert a hardcoded limit in
> pg_nofile().





Re: [BUGS] Too many open files

2001-08-01 Thread Tom Lane

> I take it that pg_nofile is the maximum number of files to open per
> Postgres session?

Right, it's per backend.

regards, tom lane




[BUGS] query first time works second time fails for one connection

2001-08-01 Thread pgsql-bugs

Rahul Gade ([EMAIL PROTECTED]) reports a bug with a severity of 2
The lower the number the more severe it is.

Short Description
query first time works second time fails for one connection

Long Description
In fact I wanted to execute the code that I have placed in the comment
in the sample below, but ORDER BY ... DESC and LIMIT are not supported
in a subquery there, so I chose the other option of using a temp table
to store the result of the subquery.

The procedure is invoked through a before/after insert trigger (I have
tried it on both events). When data is inserted through another
procedure, it works the first time, but the second time it gives me
this error:

   ERROR:  ExecOpenScanR: failed to open relation 28705

Before the second run I take care of dropping the temp table tr3.
The command fails the second time within a single connection session,
but works again in the next connection session to the database.

I do not understand why it fails the second time while it works fine
the first time.

Hello, that's not enough - I have found another thing which seems to
be a bug: I can't use the command

   drop table tablename

inside the function.

I am using PostgreSQL 7.0.2 on Red Hat 7.

Thanks for the quick response...

Apart from these bugs, I am fascinated with PostgreSQL. It is better
than other RDBMSs.
 

Sample Code
create function funNotificationLimit() returns opaque
as '
declare
    serverId int;
begin
    serverId := NEW.srvserverid;
    -- keep only the 20 newest reports for this server: collect their
    -- ids in a temp table, then delete everything older
    create temp table tr as
        select rptreportid from reporttable
        where srvserverid = serverId
        order by rpttimestamp desc limit 20;
    delete from reporttable
        where srvserverid = serverId
          and rptreportid < (select min(rptreportid) from tr);

    /* what I actually wanted, but a subquery cannot use
       ORDER BY/LIMIT here:
       delete from reporttable where srvserverid = serverId and
       rptreportid not in (select rptreportid from reporttable
       where srvserverid = serverId order by rpttimestamp desc limit 20); */
    return NEW;
end;
' language 'plpgsql';
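
For reference, the before/after insert trigger the report describes
would be created something like this (the trigger name here is
hypothetical):

create trigger trgNotificationLimit
    after insert on reporttable
    for each row execute procedure funNotificationLimit();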

No file was uploaded with this report





Re: [BUGS] Too many open files

2001-08-01 Thread Oleg Bartunov

From my /etc/rc.d/rc.local:

# increase RCVBUF to optimize proxy<->backend
echo 131072 > /proc/sys/net/core/rmem_max
# increase maximum opened files
echo 8192 > /proc/sys/fs/file-max
# increase shared memory
echo "1" > /proc/sys/kernel/shmmax


Regards,

Oleg

On Wed, 1 Aug 2001, Tom Lane wrote:

> Linux (and BSD) have a tendency to promise more than they can deliver
> about how many files an individual process can open.  Look at
> pg_nofile() in src/backend/storage/file/fd.c --- it believes whatever
> sysconf(_SC_OPEN_MAX) tells it, and on these OSes the answer is likely
> to be several thousand.  Which the OS can indeed support when *one*
> backend does it, but not when dozens of 'em do it.

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

