Re: [GENERAL] 4 billion record limit?

2000-07-27 Thread Mathieu Arnold
Chris Bitmead wrote: > > Any complex scheme to solve this seems like a waste of time. In a couple > of years when you are likely to be running out, you'll probably be > upgrading your computer to a 64bit one with a newer version of postgres, > and then the problem will disappear. that's the k
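
For a rough sense of scale (a back-of-the-envelope figure, assuming a steady insert rate and no OID reuse), burning through a 32-bit OID counter within two years would take a sustained rate of roughly 68 inserts per second:

    $ echo '2^32' | bc
    4294967296
    $ echo '4294967296 / (2 * 365 * 86400)' | bc    # inserts/second to exhaust 2^32 OIDs in two years
    68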

Re: [GENERAL] 4 billion record limit?

2000-07-27 Thread brad
Mathieu Arnold wrote: > Chris Bitmead wrote: > > > > > Any complex scheme to solve this seems like a waste of time. In a couple > > of years when you are likely to be running out, you'll probably be > > upgrading your computer to a 64bit one with a newer version of postgres, > > and then the prob

RE: [GENERAL] 4 billion record limit?

2000-07-27 Thread Bradley Kieser
True, it is a big number and for most people I would agree with what you are saying. Computationally the amount of activity on the database needed to use up that sort of number is immense. But then, two years ago the prospect of a 1GHz PC processor was also remote. I can only say that OpenSour

Re: [GENERAL] 4 billion record limit?

2000-07-27 Thread Bradley Kieser
My mistake! ;-) I remember wondering who would ever need more than the 16K that the Sinclair Spectrum could give you! Quoting "Prasanth A. Kumar" <[EMAIL PROTECTED]>: > brad <[EMAIL PROTECTED]> writes: > > > > Simply waiting for 64bit numbers is rather inelegant and also presumes > usage > >

Re: [GENERAL] 4 billion record limit?

2000-07-27 Thread Bradley Kieser
Quoting Tom Lane <[EMAIL PROTECTED]>: > Paul Caskey <[EMAIL PROTECTED]> writes: > >> No doubt about it, you're likely to get a few "duplicate key" errors and > >> stuff like that. I'm just observing that it's not likely to be a > >> complete catastrophe, especially not if you don't rely on OIDs

RE: [GENERAL] 4 billion record limit?

2000-07-27 Thread Andrew Snow
> My mistake! ;-) > I remember wondering who would ever need more than the 16K that > the Sinclair Spectrum could give you! To go back to my original point about putting things in perspective - increasing this by 2^32 would give you 68 terabytes of RAM. But if we can get rid of OIDs altogether

[GENERAL] Problem

2000-07-27 Thread Merlijn van der Mee
This morning one of my postgres databases wasn't working anymore. I can connect to this database, do a select on a table, but when I do a vacuum, a select on an internal table (pg_...) or just '\d', the backend gives a segfault. It has always worked, and the other databases running on the same

[GENERAL] Re: postgresql history function

2000-07-27 Thread Merlijn van der Mee
I had that same problem on Solaris, Irix and HP-UX. It seems that Linux is the only platform that has a nice history in psql. Merlijn Peter Mittermayer wrote: > > Hi, > > I compiled and installed PostgreSQL v7.0.2 on a Linux box where the > history function in psql (cursor up/down) worked w

Re: [GENERAL] pg_dump problem

2000-07-27 Thread Andrew Sullivan
On Thu, Jul 27, 2000 at 12:11:53PM -0400, Antoine Reid wrote: > > In 7.x versions, this appears to be fixed using the '-f' switch: > > > > mymachine:~$ pg_dump -u [database] -f [somefile] > > That is absolutely correct, although we might want to have the 'Username:' > and 'Password:' prompt
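
A minimal sketch of the -f form being described (mydb and mydb.sql are placeholder names): with -f the dump goes to the named file, so the Username: and Password: prompts produced by -u still appear on the terminal instead of being swallowed by a shell redirect.

    $ pg_dump -u -f mydb.sql mydb
    Username: ...
    Password: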

Re: [GENERAL]

2000-07-27 Thread Lamar Owen
"Mehta, Ashok" wrote: > > Hi All, > > I am running postgres on RedHat Linux It was running fine and our sysadmin > added a scsi tape device to the kernel and rebooted the machine so the > postmaster was killed with -9 signal and after that when I start postmaster > I get > FATAL: StreamServerPo

[GENERAL] Re: pg_dump problem

2000-07-27 Thread Kyle
Andrew Sullivan wrote: [pg_dump problem] > If you just type the username and password after that, you'll get the > output you want. Problem is that you're redirecting all output to a > file, and that includes the username and password prompts. > > In 7.x versions, this appears to be fixed using

Re: [GENERAL]

2000-07-27 Thread Jim Mercer
On Thu, Jul 27, 2000 at 12:18:23PM -0400, Mehta, Ashok wrote: > I am running postgres on RedHat Linux. It was running fine and our sysadmin > added a SCSI tape device to the kernel and rebooted the machine, so the > postmaster was killed with a -9 signal, and after that when I start postmaster did the

Re: [GENERAL]

2000-07-27 Thread Tom Lane
"Mehta, Ashok" <[EMAIL PROTECTED]> writes: > I get > FATAL: StreamServerPort: bind() failed: Permission denied > Is another postmaster already running on that port? > If not, remove socket node (/tmp/.s.PGSQL.5432) and retry. > postmaster: cannot create UNIX stream port Hmm. The advice a

Re: [GENERAL] 4 billion record limit?

2000-07-27 Thread Paul Caskey
Tom Lane wrote: > > Paul Caskey <[EMAIL PROTECTED]> writes: > > >> No doubt about it, you're likely to get a few "duplicate key" errors and > >> stuff like that. I'm just observing that it's not likely to be a > >> complete catastrophe, especially not if you don't rely on OIDs to be > >> unique
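
The sort of failure being discussed only bites where something actually enforces OID uniqueness, for example a unique index on the system column. A hypothetical illustration (table and index names made up for the example):

    $ psql mydb
    mydb=# CREATE TABLE items (name text);
    mydb=# CREATE UNIQUE INDEX items_oid_idx ON items (oid);
    -- once the 32-bit OID counter wraps, an INSERT that happens to be
    -- assigned an already-used OID for this table fails with a duplicate
    -- key error on items_oid_idx

Tables that never index or store OIDs simply see the counter wrap without noticing, which is the point being made about it not being a complete catastrophe.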

Re: [GENERAL] Connection problem under extreme load.

2000-07-27 Thread Thomas Lockhart
> We have been doing some load testing with postgresql, and we have been > getting the following error when libpq attempts to connect to the > backend. This only happens occasionally and, as I said, under extreme > load (e.g. load average 30+ on a single-processor Sun). > connectDBStart() -- conne