Andrew Alcheyev <bu...@telenet.ru> wrote:
 
> Well, it helps and the backend hasn't crashed yet, but the
> client is still experiencing query problems at some point (not
> far, I guess, from where its backend would have segfaulted
> without the patch). This time it encounters the following error
> from the backend:
> 
> ERROR:  out of shared memory
> HINT:  You might need to increase max_pred_locks_per_transaction.
 
I noticed that you are using prepared transactions.  Do you have any
lingering transactions prepared but not committed or rolled back? 
(You can look in pg_prepared_xacts, and see when they were
prepared.)
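For instance, a check along these lines would show anything still pending (the `prepared` column is the time each transaction was prepared, so very old rows are the suspicious ones):

```sql
-- List prepared transactions that were never committed or rolled
-- back; stale entries here can pin predicate locks indefinitely.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;
```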
 
> So what should I do? Do I need to increase
> "max_pred_locks_per_transaction" in postgresql.conf?
 
Maybe, but let's rule out other problems first.
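If it does come to raising the setting, the change itself is a one-line postgresql.conf edit; note that this parameter only takes effect after a server restart. The value below is purely illustrative (a doubling of the default 64), not a recommendation:

```
# postgresql.conf -- illustrative value, not a recommendation;
# the default is 64, and changing this requires a server restart
max_pred_locks_per_transaction = 128
```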
 
> And how can I calculate the desired value?
 
You would need to review pg_locks under load to get a handle on
that.  I don't think anyone has devised any generalized formula yet,
but if we rule out other problems, I'd be happy to review your lock
situation and make suggestions.
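As a starting point, something like the following (using the standard pg_locks columns) would show how many predicate locks each backend is holding while the workload runs, which is the raw data for any sizing decision:

```sql
-- Count SIRead (predicate) locks per backend to see which
-- sessions are consuming entries in the shared lock table.
SELECT pid, count(*) AS predicate_locks
FROM pg_locks
WHERE mode = 'SIReadLock'
GROUP BY pid
ORDER BY predicate_locks DESC;
```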
 
-Kevin

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
