On Fri, 2006-10-06 at 07:51 +0200, Martin Schreiber wrote:
> On Thursday 05 October 2006 22.41, Joost van der Sluis wrote:
> > > > Now I'm thinking about using an interface, to avoid duplicated code. But I
> > > > don't know what effect that has on run-time performance. I mean, the
> > > > idea was to
On Tue, 17 Oct 2006, Micha Nelissen wrote:
> Daniël Mantione wrote:
> > no kernel call is necessary. If the lock starts spinning, on uniprocessor
> > it won't be released until the kernel schedules the other thread. This is
> > exactly the idea behind kernel futexes: if the lock is not held, no
>
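A minimal sketch of that fast path, assuming only InterlockedCompareExchange,
InterlockedExchange and ThreadSwitch from the FPC system unit; the names
lockword, AcquireLock and ReleaseLock are made up for illustration, and the
slow path merely yields where a real futex-based lock would sleep in the kernel
until woken:

{ Sketch only: a lock with a user-space fast path, in the spirit of futexes.
  An uncontended acquire/release needs no kernel call at all. }
var
  lockword: longint = 0;   { 0 = free, 1 = held }

procedure AcquireLock;
begin
  { fast path: one atomic compare-and-swap, no kernel involvement }
  while InterlockedCompareExchange(lockword, 1, 0) <> 0 do
    ThreadSwitch;          { placeholder; a futex would block in the kernel here }
end;

procedure ReleaseLock;
begin
  InterlockedExchange(lockword, 0);
  { a futex-based lock would wake one sleeping waiter here, if any }
end;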
On Tuesday 17 October 2006 10:03, Micha Nelissen wrote:
> Windows events do not have this problem since they are stateful.
To be more precise: Windows signals are persistent, not transient like
Unix signals are.
Vinzent.
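The usual way to get that persistent behaviour on top of pthreads is to pair
the condition variable with an explicit flag. A rough sketch; the pthread_*
routines and types are assumed to be available as they are inside cthreads.pp,
and the record and procedure names are made up for illustration:

type
  TStatefulEvent = record
    mutex: pthread_mutex_t;
    condvar: pthread_cond_t;
    state: boolean;        { true = event has been set and not yet consumed }
  end;
  PStatefulEvent = ^TStatefulEvent;

procedure EventSet(p: PStatefulEvent);
begin
  pthread_mutex_lock(@p^.mutex);
  p^.state := true;                    { the signal persists in the flag... }
  pthread_cond_signal(@p^.condvar);    { ...and wakes a waiter, if one is blocked }
  pthread_mutex_unlock(@p^.mutex);
end;

procedure EventWait(p: PStatefulEvent);
begin
  pthread_mutex_lock(@p^.mutex);
  while not p^.state do                { the loop also guards against spurious wakeups }
    pthread_cond_wait(@p^.condvar, @p^.mutex);
  p^.state := false;                   { auto-reset behaviour }
  pthread_mutex_unlock(@p^.mutex);
end;

With the flag, a Set that happens before the Wait is not lost, which is the
persistent behaviour Windows events give you; a bare pthread_cond_signal with
no waiter simply vanishes.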
Jonas Maebe wrote:
By implication this means you need a mutex to protect against race
conditions.
Not necessarily; it is at least possible to implement an atomic linked
list without requiring a mutex-style lock.
That's irrelevant. If thread 1 lets thread 2 do something, and thread 2
would s
On Tuesday 17 October 2006 09:46, Jonas Maebe wrote:
> On 17 okt 2006, at 11:22, Vinzent Hoefler wrote:
> >> The pthread_cond_wait() function atomically unlocks the mutex argument
> >> and waits on the cond argument.
> >>
> >> So the mutex should already be unlocked afterwards.
> >
On 17 okt 2006, at 11:58, Jonas Maebe wrote:
Not necessarily; it is at least possible to implement an atomic
linked list without requiring a mutex-style lock.
... using the atomic primitives of the PPC processor. Maybe it's not
possible in general...
Jonas
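For reference, the kind of structure meant here, sketched as a compare-and-swap
push loop (on PPC the primitive underneath would be lwarx/stwcx.). The node type
and PushNode are made up for illustration, and InterlockedCompareExchangePointer
is an assumption: not every RTL version provides it.

type
  PNode = ^TNode;
  TNode = record
    next: PNode;
    data: pointer;
  end;

var
  listhead: PNode = nil;

procedure PushNode(n: PNode);
var
  old: PNode;
begin
  repeat
    old := listhead;
    n^.next := old;
    { retry if another thread changed the head between the read and the swap }
  until InterlockedCompareExchangePointer(pointer(listhead), n, old) = pointer(old);
end;

Pushing is the easy half; popping from such a list is harder because of the ABA
problem, so a mutex-free design is not always this simple.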
On 17 okt 2006, at 11:50, Micha Nelissen wrote:
Jonas Maebe wrote:
Not sure if this means it's not necessary in the Mac OS X (and
possibly FreeBSD) versions, or that the Mac OS X man pages are
incomplete.
It does say: The pthread_cond_signal() function unblocks one thread
waiting for th
> Jonas Maebe wrote:
> > Not sure if this means it's not necessary in the Mac OS X (and possibly
> > FreeBSD) versions, or that the Mac OS X man pages are incomplete.
>
> It does say: The pthread_cond_signal() function unblocks one thread
> waiti
Jonas Maebe wrote:
Not sure if this means it's not necessary in the Mac OS X (and possibly
FreeBSD) versions, or that the Mac OS X man pages are incomplete.
It does say: The pthread_cond_signal() function unblocks one thread
waiting for the condition variable cond.
By implication this means
On 17 okt 2006, at 11:22, Vinzent Hoefler wrote:
The pthread_cond_wait() function atomically unlocks the mutex argument
and waits on the cond argument.
So the mutex should already be unlocked afterwards.
If you had read a couple of lines further, you would also have
found:
On Tuesday 17 October 2006 09:03, Jonas Maebe wrote:
> On 17 okt 2006, at 10:44, Daniël Mantione wrote:
> > procedure intRTLEventSetEvent(AEvent: PRTLEvent);
> > var
> >   p: pintrtlevent;
> >
> > begin
> >   p := pintrtlevent(aevent);
> >   pthread_mutex_lock(@p^.mutex);
> >   pthread_cond_signal(@p^.condv
Jonas Maebe wrote:
This last pthread_mutex_unlock does not make sense to me. From the
pthread_cond_wait man page:
Read the man page *completely*.
A condition variable must always be associated with a mutex, to
avoid the race condition where a thread prepares to wait on a condition
varia
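The race the man page describes, sketched with a record that has a boolean
state flag next to the mutex and condvar (names illustrative): if the flag is
tested without holding the mutex, the signal can slip in between the test and
the wait and is lost.

{ BROKEN waiter, for illustration only: the flag is tested outside the mutex }
if not p^.state then                   { 1. waiter sees the event as not set }
begin                                  { 2. setter runs here: state := true, cond_signal }
  pthread_mutex_lock(@p^.mutex);
  pthread_cond_wait(@p^.condvar, @p^.mutex);  { 3. waiter sleeps; the wake-up is already gone }
  pthread_mutex_unlock(@p^.mutex);
end;

Holding the mutex across both the test and the wait closes that window, because
pthread_cond_wait releases the mutex only atomically with starting to wait; that
is also why intRTLEventSetEvent takes the mutex before signalling.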
On 17 okt 2006, at 10:44, Daniël Mantione wrote:
procedure intRTLEventSetEvent(AEvent: PRTLEvent);
var
  p: pintrtlevent;
begin
  p := pintrtlevent(aevent);
  pthread_mutex_lock(@p^.mutex);
  pthread_cond_signal(@p^.condvar);
  pthread_mutex_unlock(@p^.mutex);
end;

procedure intRTLEventStartWait(A
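The StartWait counterpart is cut off above. Rather than guessing its body, here
is a hedged usage sketch from the caller's side: a worker thread blocks on an
RTL event until the main thread sets it. The program is an illustration only;
whether the public wait call is spelled RTLEventStartWait or RTLEventWaitFor
depends on the RTL version, so treat the exact names as an assumption. It needs
the cthreads unit on unix.

program eventdemo;

{$ifdef unix}
uses
  cthreads;   { installs the pthreads-based thread manager }
{$endif}

var
  ev: PRTLEvent;
  tid: TThreadID;

function Worker(arg: pointer): ptrint;
begin
  RTLEventWaitFor(ev);      { blocks until the event is set }
  writeln('worker woke up');
  Worker := 0;
end;

begin
  ev := RTLEventCreate;
  tid := BeginThread(@Worker);
  RTLEventSetEvent(ev);     { not lost even if it happens before the worker waits }
  WaitForThreadTerminate(tid, 0);
  RTLEventDestroy(ev);
end.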
Daniël Mantione wrote:
no kernel call is necessary. If the lock starts spinning, on uniprocessor
it won't be released until the kernel schedules the other thread. This is
exactly the idea behind kernel futexes: if the lock is not held, no kernel
Aren't futexes 2.6+ only?
Micha
Micha Nelissen wrote:
Marc Weustink wrote:
This was exactly the reason why I chose to use pthreads directly. For
the given situation one single semaphore call could replace this.
Well, events <> semaphores :-).
that's why I wrote: for the given situation ;)
It was for the cheap-concurrency
On Tue, 17 Oct 2006, Micha Nelissen wrote:
> Marc Weustink wrote:
> > This was exactly the reason why I chose to use pthreads directly. For
> > the given situation one single semaphore call could replace this.
>
> Well, events <> semaphores :-). We need semaphore abstraction as well for
> "p
Marc Weustink wrote:
This was exactly the reason why I chose to use pthreads directly. For
the given situation one single semaphore call could replace this.
Well, events <> semaphores :-). We need semaphore abstraction as well
for "proper" RTL, besides events.
Micha
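What Marc's single-call variant could look like with a POSIX semaphore, so that
set and wait are one call each. Everything here is a sketch: the sem_t size and
the 'pthread' library name are assumptions, and the semEvent* names are made up.

type
  sem_t = array[0..31] of byte;   { opaque; the real size is platform-dependent }
  psem_t = ^sem_t;

function sem_init(sem: psem_t; pshared: longint; value: cardinal): longint; cdecl; external 'pthread' name 'sem_init';
function sem_post(sem: psem_t): longint; cdecl; external 'pthread' name 'sem_post';
function sem_wait(sem: psem_t): longint; cdecl; external 'pthread' name 'sem_wait';

procedure semEventSet(s: psem_t);
begin
  sem_post(s);    { one call; the count remembers the signal }
end;

procedure semEventWait(s: psem_t);
begin
  sem_wait(s);    { one call; blocks only while the count is zero }
end;

As Micha notes, a semaphore is not an event: repeated posts accumulate instead
of collapsing into one set state, so this only replaces the mutex/condvar pair
in situations where exactly one waiter consumes each post.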
Daniël Mantione wrote:
On Tue, 17 Oct 2006, Jonas Maebe wrote:
On 17 okt 2006, at 09:25, Daniël Mantione wrote:
If I compare my implementation of the Chameneos benchmark with the one
from Marc (which uses Pthreads directly), mine is about two times slower.
This is probably caused by the fact that our
On Tue, 17 Oct 2006, Jonas Maebe wrote:
>
> On 17 okt 2006, at 09:25, Daniël Mantione wrote:
>
> > If I compare my implementation of the Chameneos benchmark with the one
> > from Marc (which uses Pthreads directly), mine is about two times slower.
> > This is probably caused by the fact that our thread
On 17 okt 2006, at 09:25, Daniël Mantione wrote:
If I compare my implementation of the Chameneos benchmark with the one
from Marc (which uses Pthreads directly), mine is about two times
slower.
This is probably caused by the fact that our thread functions often
require multiple pthread calls,
Where?
On Tue, 17 Oct 2006, Jonas Maebe wrote:
>
> On 16 okt 2006, at 22:49, Daniël Mantione wrote:
>
> > In other words, pthreads results
> > in subpar performance,
>
> Is the overhead of a few user level routines really that big? Once the threads
> are set up, they automatically become kernel thr
On 16 okt 2006, at 22:49, Daniël Mantione wrote:
In other words, pthreads results
in subpar performance,
Is the overhead of a few user level routines really that big? Once
the threads are set up, they automatically become kernel threads
anyway. Having a user level layer in between might ev