"Chad Attermann" <[EMAIL PROTECTED]> writes:

> Hello all.  Late last year I posted a couple of questions about
> multi-threaded application hangs in Solaris 10 for x86 platforms, and
> about thread-safety of std::basic_string in general.  This was an
> attempt to solve persistent problems I have been experiencing with my
> application hanging due to CPU utilization shooting to 100%, with the
> __gnu_cxx::__exchange_and_add function frequently making appearances
> at the top of the stack trace of several threads.
>
> I believe I have made a breakthrough recently and wanted to solicit
> the opinion of some experts on this.  I seem to have narrowed the
> problem down to running my application as root versus an unprivileged
> user, and further isolated the suspected cause to varying thread
> priorities in my application. I have theorized that spin-locks in gcc,
> particularly in the __gnu_cxx::__exchange_and_add atomicity function,
> are causing higher priority threads to consume all available cpu
> cycles while spinning indefinitely waiting for a lower priority thread
> that holds the lock.  Now I am already aware that messing with thread
> priorities is dangerous and often an exercise in futility, but I am
> surprised that something so elemental as an atomic test-and-set
> operation that may be used extensively throughout gcc could possibly
> be the culprit for all of the trouble I have been experiencing.

You explicitly mentioned x86.  For x86, __gnu_cxx::__exchange_and_add
does not use a spin-lock.
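
To make that concrete: on x86 an atomic fetch-and-add compiles down to
a single locked instruction, so there is no lock object to hold and
nothing to spin on.  The following is only an illustrative sketch of
the idea (it is not the libstdc++ source), assuming GCC-style inline
assembly on x86:

  // Illustrative sketch only -- not the actual libstdc++ code.
  // A single "lock xadd" does the read-modify-write atomically in
  // hardware, so no spin-lock is involved at this level.
  typedef int _Atomic_word;

  static inline _Atomic_word
  exchange_and_add_x86(volatile _Atomic_word* mem, int val)
  {
      _Atomic_word result = val;
      __asm__ __volatile__("lock; xaddl %0, %1"
                           : "+r"(result), "+m"(*mem)
                           :
                           : "memory");
      return result;  // the value *mem held before the addition
  }

A primitive like this cannot, by itself, leave one thread spinning
while it waits for another.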

If you mean that other code may use spin-locks built on top of
__exchange_and_add, then, yes, in that case you could be getting a
priority inversion.  But gcc itself does not use any such code.  So if
you are seeing a problem of this sort, it is not a problem with gcc.
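
For what it's worth, the pattern that does produce the hang you
describe is a user-level spin-lock layered on top of an atomic
primitive, combined with fixed thread priorities.  The sketch below is
hypothetical (it is not code from gcc or libstdc++); under a strict
priority scheduler, a high-priority thread stuck in spin_lock can keep
the low-priority holder off the CPU indefinitely:

  // Hypothetical spin-lock built on an atomic test-and-set, using the
  // GCC __sync builtins.  Nothing like this is used inside gcc itself;
  // it only shows where a priority inversion would come from.
  static void spin_lock(volatile int* l)
  {
      // If the holder runs at a lower fixed priority and is never
      // scheduled again, this loop spins at 100% CPU forever.
      while (__sync_lock_test_and_set(l, 1))
          ;  // busy-wait: no yield, no sleep, no priority inheritance
  }

  static void spin_unlock(volatile int* l)
  {
      __sync_lock_release(l);
  }

A mutex that supports priority inheritance (e.g. a POSIX mutex with
PTHREAD_PRIO_INHERIT) avoids this, which is one reason plain
spin-locks and fixed priorities are a bad combination.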

Ian
