"Michael Lazzaro" <[EMAIL PROTECTED]> wrote in
>      for <foo()> {...}
>
> It doesn't matter whether foo() is a closure or function returning a
> list, lazy list, or iterator, or is a coroutine returning its .next
> value.  Which is excellent, and, I'd argue, the whole point; I'm not
> sure that we can have any coroutine syntax that _doesn't_ do that, can
> we?
>
> But, as Luke pointed out, some of the other syntax required to make
> that work isn't particularly friendly:
...
> If I work backwards, the syntax I'd _want_ for something like that
> would be much like Luke proposed:
...
> ... where the internal pre_traverses are yielding the _original_
> pre_traverse.  Whoa, though, that doesn't really work, because you'd
> have to implicitly do the clone, which screws up the normal iterator
> case!  And I don't immediately know how to have a syntax do the right
> thing in _both_ cases.

I'll try not to repeat myself again...

I think that the problem you're having with the syntax is a result of
focusing on the control-flow aspect, rather than the dataflow. If you
describe it using data, then you get

  sub pre_traverse(@out is rw, %data)
  {
      push @out, %data{ value };
      pre_traverse( @out, %data{ left  } );
      pre_traverse( @out, %data{ right } );
  }

This describes the data correctly, so now let's fix the execution model. I do
this in two steps: first, I provide an implementation that works; then I
assume some magic happens...

The implementation that works is to put the recursive function inside a
thread, and then to implement @out using a tie (or, rather, a variable type)
whose semantics constrain execution so that only the producer or the
consumer thread is running at any given time (this requires a pair of
semaphores, plus a bit of cooperation from the sub).

This would implement the ping-pong control flow that is desired, whilst
maintaining the dataflow of the recursive traversal.
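
To make that concrete, here is a minimal sketch of the thread-plus-semaphores
version, written in Python rather than Perl since the Perl 6 machinery under
discussion doesn't exist yet. The names (PingPongChannel, traverse) are mine,
not part of any proposal; the point is just that a pair of semaphores around a
one-slot fifo gives the ping-pong handoff while pre_traverse stays a plain
recursive sub:

  import threading

  class PingPongChannel:
      """A one-slot fifo; two semaphores keep producer and consumer in
      strict alternation, so effectively only one side runs at a time."""

      def __init__(self):
          self._slot = None
          self._done = False
          self._may_produce = threading.Semaphore(1)  # producer may fill the slot
          self._may_consume = threading.Semaphore(0)  # consumer may read the slot

      def push(self, value):
          self._may_produce.acquire()    # wait until the consumer wants more
          self._slot = value
          self._may_consume.release()    # hand control to the consumer

      def close(self):
          self._may_produce.acquire()
          self._done = True
          self._may_consume.release()

      def __iter__(self):
          while True:
              self._may_consume.acquire()  # wait until the producer has pushed
              if self._done:
                  return
              yield self._slot             # consumer's loop body runs here
              self._may_produce.release()  # hand control back to the producer

  def pre_traverse(out, data):
      """The recursive sub, written purely in terms of dataflow into 'out'."""
      if not data:
          return
      out.push(data['value'])
      pre_traverse(out, data.get('left'))
      pre_traverse(out, data.get('right'))

  def traverse(data):
      """Run pre_traverse in its own thread; hand the caller the iterator."""
      out = PingPongChannel()

      def producer():
          pre_traverse(out, data)
          out.close()

      threading.Thread(target=producer, daemon=True).start()
      return iter(out)

  tree = {'value': 1,
          'left':  {'value': 2, 'left': None, 'right': None},
          'right': {'value': 3, 'left': None, 'right': None}}

  for v in traverse(tree):   # the plain for loop the caller wanted
      print(v)               # 1, then 2, then 3, one node per exchange

Running this prints the values lazily, with the producer thread parked between
each one: exactly the hand-over-control behaviour that the tie/variable-type
would provide transparently.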

The magic bit is to wrap it up in a keyword and assume that, given the
keyword, perl can optimise the threads onto continuations. If perl can't
optimise for some reason, it doesn't matter because the control flow will
still be correct. I'll write my coro using invocant syntax to supply the
fifo (which I will define to be, by default, a magic fifo of the type I just
described).

coro pre_traverse(@out: %data)
{
  @out.push %data{value};
  @out.pre_traverse %data{left};
  @out.pre_traverse %data{right};
}

Finally, because @out is an invocant (magically created by perl as the
handle to the coroutine), the calls can use the implicit-invocant form and
drop it entirely:

coro pre_traverse(@out: %data)
{
  .push %data{value};
  .pre_traverse %data{left};
  .pre_traverse %data{right};
}

So we get to a simple end-point: similar to the original, but simpler because
I don't have to describe the control flow or worry about cloning the
execution context, yielding, etc. The fifo (invocant) does all of that for me
and, in addition, acts as the handle to the coro. So if I want to do silly
things with multiple, nested, interleaved coroutines, then using different
handles avoids any ambiguity.
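
For instance, reusing the hypothetical traverse()/PingPongChannel sketch from
above, two simultaneous walks stay cleanly separated because each one pulls
through its own handle:

  walk_a = traverse(tree)     # each call makes a fresh channel/handle
  walk_b = traverse(tree)

  for a, b in zip(walk_a, walk_b):
      print(a, b)             # (1, 1), (2, 2), (3, 3): the walks interleave
                              # without disturbing one another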


> (s/coroutine/thread/g for the same rough arguments, e.g. "why should
> the caller care if what they're doing invokes parallelization, so long
> as it does the right thing?")

Indeed: why should the callee care, either?


Dave.

