> David,
> thank you for the tutorial, it is quite enlightening.
> But first of all, did you take a look at my small test program?
Yes. It is a classic example of mutex abuse. A mutex is not an
appropriate synchronization mechanism when it's going to be held most of the
time and r
On Fri, 16 May 2008, Andriy Gapon wrote:
> that was just an example, probably quite a bad one.
> I should limit myself to the program that I sent, and repeat that
> the result it produces is not what I would call reasonably expected. And
> I will repeat that I understand that
on 15/05/2008 22:51 Brent Casavant said the following:
On Thu, 15 May 2008, Andriy Gapon wrote:
With current libthr behavior the GUI thread would never have a chance to get
the mutex, as the worker thread would always be the winner (as my small
program shows).
The example you gave indicates an incorrect mechanism being used for the
GUI to communicate
on 15/05/2008 22:29 David Schwartz said the following:
what if you have an infinite number of items on one side and a finite
number on the other, and you want to process them all (in infinite
time, of course). Would you still try to finish everything on one
side (the infinite one) or would you try
But I think that it is not "fair" that at re-lock the former
owner gets the lock immediately while the thread that waited on it
longer doesn't get a chance.
I believe this is what yield() is for. Before attempting a re-lock you
should call yield() to allow other threads a chance to run.
on 15/05/2008 15:57 David Xu said the following:
Andriy Gapon wrote:
Maybe. But that's not what I see with my small example program. One
thread releases and re-acquires a mutex 10 times in a row while the
other doesn't get it a single time.
I think that there is a very slim chance of a blocked thread
preempting a running thread in this circumstance.
> Brent, David,
>
> thank you for the responses.
> I think I incorrectly formulated my original concern.
> It is not about behavior at mutex unlock but about behavior at mutex
> re-lock. You are right that waking waiters at unlock would hurt
> performance. But I think that it is not "fair" that at re-lock the former
> owner gets the lock immediately while the thread that waited on it longer
> doesn't get a chance.
On Thu, 15 May 2008, Daniel Eischen wrote:
On Thu, 15 May 2008, Andriy Gapon wrote:
Or, even more realistic: suppose there is a feeder thread that puts things on
the queue; it would never be able to enqueue new items until the queue
becomes empty, if the worker thread's code looks like the following:
while (1)
{
	pthread_mutex_lock(&work_mutex);
	/* ... dequeue and process all queued items ... */
	pthread_mutex_unlock(&work_mutex);
}
David Schwartz wrote:
Are you out of your mind?! You are specifically asking for the absolute
worst possible behavior!
If you have fifty tiny things to do on one side of the room and fifty
tiny things to do on the other side, do you cross the room after each
one? Of course not. That would
on 15/05/2008 07:22 David Xu said the following:
In fact, libthr is trying to avoid this convoying: if thread #1
hands off the ownership to thread #2, it will cause lots of context
switches; in the ideal world, I would let thread #1 run until it
exhausts its time slice, and at the end of its time
On Wed, 14 May 2008, Andriy Gapon wrote:
> I believe that the behavior I observe is broken because: if thread #1
> releases a mutex and then tries to re-acquire it while thread #2 was
> already blocked waiting on that mutex, then thread #1 should be "queued"
> after thread #2 in the mutex waiters' list.
on 14/05/2008 18:17 Andriy Gapon said the following:
I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as threads
library and on both it produces "BROKEN" message.
I compile this program as follows:
cc sched_test.c -o sched_test -pthread
I believe that the behavior I observe is broken.