* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
> My theory is that after that point all the cores are busy,
> and processes start to be sometimes context switched while holding
> the spinlock, which kills performance. Has anyone else seen that
> pattern?
Isn't this the same issue that has prompted multiple people to propose (sometimes with code, as I recall) ripping out our internal spinlock implementation and replacing it with kernel-backed calls that handle this better, specifically by dealing with issues like the above? Have you seen those threads in the past? Any thoughts about moving in that direction?

> Curiously, I don't see that when connecting pgbench via TCP
> over localhost, only when connecting via unix domain sockets.
> Overall performance is higher over unix domain sockets, so I guess
> the TCP layer adds some overhead, hurting performance, and also
> affects scheduling somehow, making the steep drop go away.

I wonder if the kernel locks around unix domain sockets are helping us out here, while the kernel isn't able to take advantage of such knowledge about the waiting process when it's a TCP connection? Just a hunch.

Thanks,

Stephen
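For what it's worth, one common shape a "kernel-backed" lock takes on Linux (this is only an illustrative sketch, not code from any actual patch on those threads, and not how s_lock.c works today) is the three-state futex mutex: the uncontended path is a single compare-and-swap with no syscall, and contended waiters sleep in the kernel via futex(2) instead of spinning, so a preempted lock holder doesn't make every waiter burn its whole timeslice.

```c
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>
#include <stddef.h>

/* lock word states: 0 = unlocked, 1 = locked, 2 = locked with (possible) waiters */
static atomic_int lock_word = 0;

static void futex_wait(atomic_int *addr, int expected)
{
    /* sleeps only if *addr still equals expected; the kernel now knows who waits */
    syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

static void futex_wake(atomic_int *addr)
{
    syscall(SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
}

static void lock(void)
{
    int c = 0;
    /* fast path: 0 -> 1 with no syscall when uncontended */
    if (atomic_compare_exchange_strong(&lock_word, &c, 1))
        return;
    /* slow path: mark the lock contended (2), then sleep until it's free */
    if (c != 2)
        c = atomic_exchange(&lock_word, 2);
    while (c != 0)
    {
        futex_wait(&lock_word, 2);
        c = atomic_exchange(&lock_word, 2);
    }
}

static void unlock(void)
{
    /* only pay for a syscall if someone may actually be asleep */
    if (atomic_exchange(&lock_word, 0) == 2)
        futex_wake(&lock_word);
}
```

The point relevant to the drop-off above: with a plain test-and-set spinlock, a holder that gets context-switched leaves every waiter spinning blindly, whereas here the scheduler sees the waiters as blocked and can run the holder instead. The price is a syscall on the contended path, which is why the fast path stays a pure userspace CAS.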