Hi Jakub,
I think the current semaphore sleep system ought to be improved.
I'm not sure how, but since the GSoC deadline is approaching, I'll just post
the results without the semaphores.
Instead of sleeping on a per-task basis (for example, there are depend waits,
task waits, taskgroup waits, etc.)
> I thought we don't want to go lock-free, the queue operations aren't easily
> implementable lock-free, but instead with a lock for each of the queues,
Hi,
By lock-free I meant using locks only for the queues,
but my terminology was indeed confusing, sorry about that.
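For clarity, the "locks only for the queues" scheme could be sketched roughly as below. This is only an illustrative sketch, not libgomp code; all names (struct task_queue, pop_or_steal, etc.) are hypothetical:

```c
/* Sketch of per-thread task queues, each protected by its own lock,
   with stealing from other threads' queues.  Illustrative only; these
   are not libgomp's actual types or functions.  */
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

struct task { struct task *next; void (*fn) (void *); void *data; };

struct task_queue
{
  pthread_mutex_t lock;   /* one lock per queue, no global task lock */
  struct task *head;
};

static struct task_queue queues[NTHREADS];

/* Pop from our own queue first; if it is empty, try to steal from the
   other threads' queues.  Only the per-queue locks are taken.  */
static struct task *
pop_or_steal (int self)
{
  for (int i = 0; i < NTHREADS; i++)
    {
      int victim = (self + i) % NTHREADS;
      struct task_queue *q = &queues[victim];
      pthread_mutex_lock (&q->lock);
      struct task *t = q->head;
      if (t)
        q->head = t->next;
      pthread_mutex_unlock (&q->lock);
      if (t)
        return t;
    }
  return NULL;  /* every queue was empty */
}
```

The point of the design is that contention is spread across NTHREADS locks instead of a single team-wide task lock.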
> mean we don't in so
On Sat, Aug 03, 2019 at 06:11:58PM +0900, 김규래 wrote:
> I'm currently having trouble implementing the thread sleeping mechanism for
> when the queue runs out of tasks.
> The problem is that it's hard to maintain consistency between the thread
> sleeping routine and the queues.
> See the pseudocode below:
>
>
Hi,
I'm currently having trouble implementing the thread sleeping mechanism for
when the queue runs out of tasks.
The problem is that it's hard to maintain consistency between the thread
sleeping routine and the queues.
See the pseudocode below:
1. check queue is empty
2. go to sleep
if we go lock-free, th
wanted to be sure that's the general case.
Thanks.
Ray Kim
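The race in the "1. check queue is empty, 2. go to sleep" pseudocode above is the classic lost-wakeup problem: a task can be enqueued between the check and the sleep. With a mutex-protected queue, a condition variable avoids it, because the emptiness check and the decision to sleep happen under the same lock. A minimal sketch, with illustrative names only (not libgomp's):

```c
/* Lost-wakeup-free sleep on an empty queue: the emptiness test and
   pthread_cond_wait happen atomically with respect to push_task,
   because both hold queue_lock.  Illustrative sketch only.  */
#include <pthread.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;
static int task_count = 0;   /* stand-in for a real queue */

static void
push_task (void)
{
  pthread_mutex_lock (&queue_lock);
  task_count++;
  pthread_cond_signal (&queue_cond);   /* wake one sleeping worker */
  pthread_mutex_unlock (&queue_lock);
}

static int
pop_task_blocking (void)
{
  pthread_mutex_lock (&queue_lock);
  while (task_count == 0)              /* re-check after every wakeup */
    pthread_cond_wait (&queue_cond, &queue_lock);
  task_count--;
  pthread_mutex_unlock (&queue_lock);
  return 1;
}
```

Going lock-free makes exactly this coupling hard, which is presumably why the sleep/wake path is the sticking point: a futex-style scheme needs the same check-then-sleep step to be made atomic by re-checking the queue state after publishing the "I am sleeping" flag.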
-Original Message-
From: "Jakub Jelinek"
To: "김규래";
Cc: ;
Sent: 2019-07-23 (Tue) 03:54:13 (GMT+09:00)
Subject: Re: Re: [GSoC'19, libgomp work-stealing] Task parallelism runtime
On Sun, Jul 21, 2019 at 04:46:33PM +0900, 김규래 wrote:
> About the snippet below,
>
> if (gomp_barrier_last_thread (state))
>   {
>     if (team->task_count == 0)
>       {
>         gomp_team_barrier_done (&team->barrier, state);
>         gomp_mutex_unlock (&team->task_lock);
>         gomp_team_barrier_wake (&
Hi Jakub,
About the snippet below,
if (gomp_barrier_last_thread (state))
  {
    if (team->task_count == 0)
      {
        gomp_team_barrier_done (&team->barrier, state);
        gomp_mutex_unlock (&team->task_lock);
        gomp_team_barrier_wake (&team->barrier, 0);
        return;
      }
    gomp_team_barrier_set_wa
On Mon, Jun 24, 2019, at 3:55 PM, 김규래 wrote:
> Hi,
> I'm not very familiar with the gomp plugin system.
> However, looking at 'GOMP_PLUGIN_target_task_completion', it seems like
> tasks have to go in and out of the runtime.
> In that case, is it right that the tasks have to know from which
On Tue, Jul 09, 2019 at 09:56:00PM +0900, 김규래 wrote:
> Hi,
> This is an update about my status.
> I've been working on unifying the three queues into a single queue.
> I'm almost finished and passed all the tests except for the dependency
> handling part.
For dependencies, I can imagine taking a
Hi,
This is an update about my status.
I've been working on unifying the three queues into a single queue.
I'm almost finished and passed all the tests except for the dependency handling
part.
Ray Kim
On Tue, Jun 25, 2019 at 04:55:17AM +0900, 김규래 wrote:
> I'm not very familiar with the gomp plugin system.
> However, looking at 'GOMP_PLUGIN_target_task_completion', it seems like tasks
> have to go in and out of the runtime.
> In that case, is it right that the tasks have to know from which queue they
Hi,
I'm not very familiar with the gomp plugin system.
However, looking at 'GOMP_PLUGIN_target_task_completion', it seems like tasks
have to go in and out of the runtime.
In that case, is it right that the tasks have to know which queue they
came from?
I think I'll have to add the id of the corre
> Another option, which I guess starts to go out of scope of your gsoc, is
> parallel depth first (PDF) search (Blelloch 1999) as an alternative to work
> stealing. Here's a presentation about some recent work in this area,
> although for Julia and not OpenMP (no idea if PDF would fit with OpenMP a
On Wed, Jun 5, 2019 at 10:42 PM 김규래 wrote:
> > On Wed, Jun 5, 2019 at 9:25 PM 김규래 wrote:
>
> >
> > > Hi, thanks for the detailed explanation.
> > > I think I now get the picture.
> > > Judging from my current understanding, the task-parallelism currently
> > > works as follows:
> > > 1. Tasks are pl
> On Wed, Jun 5, 2019 at 9:25 PM 김규래 wrote:
>
> > Hi, thanks for the detailed explanation.
> > I think I now get the picture.
> > Judging from my current understanding, the task-parallelism currently works
> > as follows:
> > 1. Tasks are placed in a global shared queue.
> > 2. Workers consume t
> given some special treatment?
>
> Ray Kim
>
> -Original Message-
> From: "Jakub Jelinek"
> To: "김규래";
> Cc: ;
> Sent: 2019-06-04 (Tue) 03:21:01 (GMT+09:00)
> Subject: Re: [GSoC'19, libgomp work-stealing] Task parallelism runtime
>
On Thu, Jun 06, 2019 at 03:25:24AM +0900, 김규래 wrote:
> Hi, thanks for the detailed explanation.
> I think I now get the picture.
> Judging from my current understanding, the task-parallelism currently works
> as follows:
> 1. Tasks are placed in a global shared queue.
It isn't a global shared qu
correct, I guess the task priority should be given
some special treatment?
Ray Kim
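On the "special treatment" for task priority: one way a single unified queue could still honor priorities is an array of buckets indexed by priority level, always serving the highest non-empty bucket first. This is only a hypothetical sketch, not libgomp's actual scheme, and the names are illustrative:

```c
/* Priority buckets for a unified task queue: one intrusive list per
   priority level, pop always takes from the highest non-empty level.
   Illustrative sketch only, not libgomp code.  */
#include <stddef.h>

#define NPRIO 4   /* assumed number of priority levels */

struct task { struct task *next; int prio; };

static struct task *buckets[NPRIO];   /* buckets[NPRIO - 1] is highest */

static void
push_task (struct task *t)
{
  t->next = buckets[t->prio];
  buckets[t->prio] = t;
}

static struct task *
pop_task (void)
{
  for (int p = NPRIO - 1; p >= 0; p--)
    if (buckets[p])
      {
        struct task *t = buckets[p];
        buckets[p] = t->next;
        return t;
      }
  return NULL;   /* queue empty at every priority */
}
```

With work-stealing this keeps priorities approximate rather than global, since each thief only sees the victim queue it happens to scan.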
-Original Message-
From: "Jakub Jelinek"
To: "김규래";
Cc: ;
Sent: 2019-06-04 (Tue) 03:21:01 (GMT+09:00)
Subject: Re: [GSoC'19, libgomp work-stealing] Task parallelism runtime
On Tue, Jun 04, 2019 at 03:01:13AM +0900, 김규래 wrote:
> Hi,
> I've been studying the libgomp task parallelism system.
> I have a few questions.
> First, tracing the events shows that only the main thread calls GOMP_task.
No, any thread can call GOMP_task, in particular the thread that encountered
t
Hi,
I've been studying the libgomp task parallelism system.
I have a few questions.
First, tracing the events shows that only the main thread calls GOMP_task.
How do the other worker threads enter the libgomp runtime?
I can't find the entry point of the worker threads from the event tracing and
th
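As a small illustration of the answer given above (any thread that encounters a task construct calls GOMP_task, not only the main thread), consider this minimal OpenMP program; the function name is mine, and the GOMP_task calls are what gcc emits for the task pragma when compiling with -fopenmp:

```c
/* Each of the four threads in the parallel region encounters the task
   construct once, so each of them calls GOMP_task (when built with
   gcc -fopenmp).  Minimal illustration only.  */

static int
run_tasks (void)
{
  int hits = 0;   /* shared across the team and across the tasks */
#pragma omp parallel num_threads (4)
  {
#pragma omp task
    {
#pragma omp atomic
      hits++;
    }
  }
  /* The implicit barrier at the end of the parallel region waits for
     all outstanding tasks, so hits is final here.  */
  return hits;
}
```

Built with -fopenmp this should return 4, one task per encountering thread, which is why tracing only the master thread's calls misses most GOMP_task entries.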