Thanks, so far that looks like it is helping.
Only time will tell :)
I take it that pg_nofile is the max number of files a postgres backend can
open per session?
Darin
Tom Lane wrote:
> Darin Fisher <[EMAIL PROTECTED]> writes:
> > I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.
> > Under a pretty heavy load:
> > 1000 Transactions per second
> > 32 Open connections
>
> > Everything restarts because of too many open files.
> > I have increased my max number of open files to 16384, but this
> > just delays the inevitable.
>
> > I have tested the same scenario under Solaris 8 and it works
> > fine.
>
> Linux (and BSD) have a tendency to promise more than they can deliver
> about how many files an individual process can open. Look at
> pg_nofile() in src/backend/storage/file/fd.c --- it believes whatever
> sysconf(_SC_OPEN_MAX) tells it, and on these OSes the answer is likely
> to be several thousand. Which the OS can indeed support when *one*
> backend does it, but not when dozens of 'em do it.
>
> I have previously suggested that we should have a configurable upper
> limit for the number-of-openable-files that we will believe --- probably
> a GUC variable with a default value of, say, a couple hundred. No one's
> gotten around to doing it, but if you'd care to submit a patch...
>
> As a quick hack, you could just insert a hardcoded limit in
> pg_nofile().
>
> regards, tom lane
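
For anyone else hitting this, here is a rough sketch of the kind of clamp
Tom describes, assuming you just want pg_nofile() to stop believing huge
sysconf() values.  The MAX_FILES_PER_BACKEND constant and the my_pg_nofile()
name are only illustrative, not the actual fd.c code:

#include <stdio.h>
#include <unistd.h>

#define MAX_FILES_PER_BACKEND 256   /* assumed per-backend ceiling */

/* Trust sysconf(_SC_OPEN_MAX), but only up to a hardcoded limit. */
static long
my_pg_nofile(void)
{
    long limit = sysconf(_SC_OPEN_MAX);

    if (limit < 0)
        limit = MAX_FILES_PER_BACKEND;   /* sysconf failed, fall back */
    else if (limit > MAX_FILES_PER_BACKEND)
        limit = MAX_FILES_PER_BACKEND;   /* cap overly optimistic values */

    return limit;
}

int
main(void)
{
    printf("OS reports %ld, backend would use %ld\n",
           sysconf(_SC_OPEN_MAX), my_pg_nofile());
    return 0;
}

With 32 backends, capping each one at a couple hundred descriptors keeps the
total comfortably under the 16384 system-wide limit mentioned above.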