Hello! On Tuesday, January 10, 2012, 10:11:04 PM you wrote:
HL> That clearly looks like a bug in the SSI feature, introduced in
HL> PostgreSQL 9.1.
HL> This looks superficially similar to the bug that Dan Ports spotted on
HL> Friday:
HL> http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg190135.html.
HL> If you can reproduce the issue easily, could you try the patch he posted
HL> and see if it fixes it?

I applied the suggested patch and it fixed the backend crashes, at least for a while - the server didn't crash for the first half of today. So I think the patch fixes my situation.

Unfortunately, it fixes only half of my problem. It does help, and the backend hasn't crashed yet, but the client still runs into query problems at some point (not far, I guess, from where its backend would have segfaulted without the patch). This time it gets the following error from the backend:

ERROR: out of shared memory
HINT: You might need to increase max_pred_locks_per_transaction.

In my first letter I forgot to mention that the client has multiple instances and they query the server at the same time. What is interesting is that this error was reported to all of them more or less simultaneously.

To repeat myself: if I set the client's variable "default_transaction_isolation" to "read committed", the error disappears.

So what should I do? Do I need to increase "max_pred_locks_per_transaction" in postgresql.conf? And how can I calculate the desired value?

With the best regards,
Andrew.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
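P.S. For anyone finding this thread in the archives, a rough sketch of the knobs involved (the numbers below are placeholders for illustration, not a recommendation - per the PostgreSQL 9.1 documentation, the shared predicate-lock table is sized for about max_pred_locks_per_transaction * (max_connections + max_prepared_transactions) locked objects):

```
# postgresql.conf -- config fragment (sketch, values are placeholders)
max_pred_locks_per_transaction = 64    # default; raise (e.g. double) if
                                       # "out of shared memory" recurs under
                                       # serializable isolation; requires restart
```

Current predicate-lock usage can be observed while the serializable workload runs, since SSI predicate locks appear in the pg_locks view with mode SIReadLock:

```
-- count live predicate locks to gauge how close you are to the table's capacity
SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';
```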