Hi Josh,
it's a "known" issue, see this thread:
http://archives.postgresql.org/pgsql-hackers/2010-02/thrd6.php#01290
HTH,
Kuba
On 8.2.2011 2:39, Josh Berkus wrote:
Ooops.
It looks like you are right, see ./src/backend/postmaster/pgstat.c
(commit 3c2313f4, Tom Lane, 2008-11-03 01:17:08). There is a new paragraph
in the 8.3 docs mentioning that TRUNCATE is not MVCC-safe and also the
blocking issue.
It's a pity that the warning wasn't there in 7.1 times :-)
Thanks,
Kuba
Tom Lane wrote:
Jakub Ouhrabka <[EMAIL PROTECTED]> writes:
> Huh. One transaction truncating a dozen tables? That would match the
> sinval trace all right ...
It should be 4 tables - but the log shown suggests there were more truncates?
> You might be throwing the baby out with the bathwater,
> performance-wise.
Yes, performance was the initial reason.
Hi Tom,
> I can think of three things that might be producing this:
we've found it: TRUNCATE
We'll try to eliminate the use of TRUNCATE and the periodic spikes should
go away. There will still be a possibility of spikes because of database
creation etc. - we'll try to handle this by issuing trivial
LOG: sending inval msg -2 0 1663 0 30036 50326
LOG: sending inval msg -2 0 1663 0 30036 50313
LOG: sending inval msg -1 0 30036 0 30622 30036
LOG: sending inval msg -2 0 1663 0 30036 50325
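If it helps anyone digging through similar logs, here is a small sketch for tallying inval traffic from excerpts like the one above. It treats the field layout of the "sending inval msg" lines as opaque and only groups by the message id right after "msg" (an assumption on my part; adjust the regex for your log_line_prefix):

```python
import re

# Sample lines copied from the log excerpt above.
SAMPLE = """\
LOG:  sending inval msg -2 0 1663 0 30036 50326
LOG:  sending inval msg -2 0 1663 0 30036 50313
LOG:  sending inval msg -1 0 30036 0 30622 30036
LOG:  sending inval msg -2 0 1663 0 30036 50325
"""

def count_inval_msgs(log_text):
    """Count 'sending inval msg' lines, grouped by the message id field."""
    counts = {}
    for m in re.finditer(r"sending inval msg (-?\d+)", log_text):
        msg_id = int(m.group(1))
        counts[msg_id] = counts.get(msg_id, 0) + 1
    return counts

print(count_inval_msgs(SAMPLE))  # {-2: 3, -1: 1}
```

A sudden burst of these per transaction is the kind of thing that fills the sinval queue and wakes every idle backend at once.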
Tom Lane wrote:
Jakub Ouhrabka <[EMAIL PROTECTED]> writes:
Hi Tom,
> I doubt we'd risk destabilizing 8.3 at this point, for a problem that
> affects so few people; let alone back-patching into 8.2.
Understood.
> OK, that confirms the theory that it's sinval-queue contention.
We've tried hard to identify what's the cause of filling the sinval queue.
> You could check this theory
> out by strace'ing some of the idle backends and seeing if their
> activity spikes are triggered by receipt of SIGUSR1 signals.
Yes, I can confirm that it's triggered by SIGUSR1 signals.
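For anyone who wants to see the same mechanism without attaching strace to a backend, here is a minimal self-contained sketch of catching SIGUSR1 in a process (plain Python stdlib, Unix-only; this only illustrates the signal delivery itself, not the backend's sinval processing):

```python
import os
import signal

received = []

def on_usr1(signum, frame):
    # Record that SIGUSR1 arrived; a backend would instead go
    # drain its sinval queue at this point.
    received.append(signum)

signal.signal(signal.SIGUSR1, on_usr1)

# Simulate the postmaster signalling this process.
os.kill(os.getpid(), signal.SIGUSR1)

print(received == [signal.SIGUSR1])  # True
```

With many idle backends, each such wakeup multiplies: every backend gets the signal and processes the whole queued burst, which matches the CPU spikes described here.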
If I understand it correctly, we have the following choices now:
1) Use only 2 cores
Hi Tom & all,
>> It sounds a bit like momentary contention for a spinlock,
>> but exactly what isn't clear.
> ok, we're going to try oprofile, will let you know...
yes, it seems like contention for a spinlock, if I'm interpreting oprofile
correctly; around 60% of time during spikes is in s_lock.
Alvaro,
>>> - do an UNLISTEN if possible
>> Yes, we're issuing unlistens when appropriate.
>
> You are vacuuming pg_listener periodically, yes? Not that this seems
> to have any relationship to your problem, but ...
yes, autovacuum should take care of this. But looking forward to
multiple-worker autovacuum
if possible
- use another signalling technique
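One way the "another signalling technique" idea could look is a polled version counter that writers bump and readers compare, instead of LISTEN/NOTIFY. This is an illustrative sketch only: the table and column names are hypothetical, and SQLite stands in for PostgreSQL just to keep the example self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE change_signal (channel TEXT PRIMARY KEY, version INTEGER)")
conn.execute("INSERT INTO change_signal VALUES ('events', 0)")

def signal_change(conn, channel):
    # Writer side: bump the counter instead of issuing NOTIFY.
    conn.execute("UPDATE change_signal SET version = version + 1 WHERE channel = ?",
                 (channel,))

def check_for_change(conn, channel, last_seen):
    # Reader side: cheap periodic poll instead of waiting on a signal.
    (version,) = conn.execute("SELECT version FROM change_signal WHERE channel = ?",
                              (channel,)).fetchone()
    return version, version != last_seen

last, changed = check_for_change(conn, "events", 0)
assert not changed            # nothing happened yet
signal_change(conn, "events")
last, changed = check_for_change(conn, "events", last)
print(changed, last)  # True 1
```

The trade-off is latency (bounded by the poll interval) in exchange for avoiding pg_listener churn; whether that is acceptable depends on the application.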
Regards
Sven
Jakub Ouhrabka wrote:
Hi all,
we have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We
came across a strange issue: when running with all 8 cores enabled,
approximately once a minute (period differs) the system is very busy
for a few seconds (~5-10s) and we don't know why.
Hi Tom,
> Interesting. Maybe you could use oprofile to try to see what's
> happening? It sounds a bit like momentary contention for a spinlock,
> but exactly what isn't clear.
ok, we're going to try oprofile, will let you know...
> Perhaps. Have you tried logging executions of NOTIFY to see
Hi all,
we have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We
came across a strange issue: when running with all 8 cores enabled,
approximately once a minute (period differs) the system is very busy for
a few seconds (~5-10s) and we don't know why - this issue doesn't show up
when fewer cores are enabled.