On Dec 19, 2013, at 10:38 AM, Matthieu Monrocq <[email protected]> 
wrote:

> Furthermore, it is relatively easy to build an unbounded channel over a 
> bounded one: just have the producer queue things. Depending on whether 
> sequencing from multiple producers is important or not, this queue can be 
> either shared or producer-local, with relative ease.

This is incorrect. The producer cannot queue messages and preserve the 
appearance of an unbounded channel. Except when sending a message on the 
channel, the producer is busy doing something else. It’s producing. That’s why 
it’s called a producer. This means that if the channel empties out, the 
consumer will run out of things to consume until the producer has finished 
producing another value. At that point the producer can send as many enqueued 
values as fit in the channel, but it’s too late: the consumer has already 
stalled.
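To make the failure mode concrete, here is a toy sketch (the capacity of 1, the specific values, and the non-blocking receive are all illustrative assumptions on my part, not part of the argument above):

```go
package main

import "fmt"

// scenario returns what the consumer observes: the first value it
// receives, whether it then found the channel empty, and what is still
// sitting in the producer's local queue at that moment.
func scenario() (first int, stalled bool, queued []int) {
	ch := make(chan int, 1) // bounded channel, capacity 1 (illustrative)
	queue := []int{}        // producer-local queue (the "unbounded" part)

	// The producer has produced two values so far: one fit in the
	// channel, the other had to go into the local queue.
	ch <- 1
	queue = append(queue, 2)
	// ...and now the producer goes back to producing; it won't touch
	// ch again until its next value is ready.

	// The consumer drains the channel...
	first = <-ch

	// ...and immediately runs dry: value 2 exists, but only in the
	// producer's local queue, which the consumer cannot see.
	select {
	case v := <-ch:
		_ = v // would not happen in this scenario
	default:
		stalled = true
	}
	return first, stalled, queue
}

func main() {
	first, stalled, queued := scenario()
	fmt.Println(first, stalled, queued)
}
```

The consumer receives 1, then finds the channel empty even though a second value has already been produced, which is exactly the stall described above.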

The only type of producer that can enqueue the messages locally is one that 
produces by selecting on channels, as it can add the sending channel to the mix 
(but even this assumes that its processing of the other channels is fast enough 
to avoid letting the channel go empty).

It’s for this very reason that in Go, the way to produce an infinite channel 
looks like this:

    // run this in its own goroutine
    // replace Type with the proper channel type
    func makeInfiniteChan(in <-chan Type, next chan<- Type) {
        defer close(next)

        // pending events (this is the "infinite" part)
        pending := []Type{}

    recv:
        for {
            // Ensure that pending always has values so the select can
            // multiplex between the receiver and sender properly
            if len(pending) == 0 {
                v, ok := <-in
                if !ok {
                    // in is closed and pending is empty; we're done
                    break
                }

                // We now have something to send
                pending = append(pending, v)
            }

            select {
            // Queue incoming values
            case v, ok := <-in:
                if !ok {
                    // in is closed, flush values
                    break recv
                }
                pending = append(pending, v)

            // Send queued values
            case next <- pending[0]:
                pending = pending[1:]
            }
        }

        // After in is closed, we may still have events to send
        for _, v := range pending {
            next <- v
        }
    }
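For reference, exercising this function might look like the following sketch (instantiating Type as int is my assumption; makeInfiniteChan is repeated verbatim so the example compiles on its own):

```go
package main

import "fmt"

type Type = int // assumption for the demo: Type instantiated as int

// makeInfiniteChan, as in the post, repeated here for self-containment.
func makeInfiniteChan(in <-chan Type, next chan<- Type) {
	defer close(next)
	pending := []Type{}
recv:
	for {
		if len(pending) == 0 {
			v, ok := <-in
			if !ok {
				break
			}
			pending = append(pending, v)
		}
		select {
		case v, ok := <-in:
			if !ok {
				break recv
			}
			pending = append(pending, v)
		case next <- pending[0]:
			pending = pending[1:]
		}
	}
	for _, v := range pending {
		next <- v
	}
}

// demo pushes n values through the "infinite" channel: each send on in
// is accepted promptly (the goroutine queues internally), and the
// consumer then drains everything in FIFO order.
func demo(n int) []Type {
	in := make(chan Type)
	next := make(chan Type)
	go makeInfiniteChan(in, next)

	for i := 0; i < n; i++ {
		in <- Type(i)
	}
	close(in)

	out := []Type{}
	for v := range next {
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(demo(5))
}
```

Note that the producer here never waits on the consumer: the buffering goroutine absorbs every value, and ordering is preserved because pending is appended at the tail and sent from the head.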

It’s a bit complicated, and requires a separate goroutine just to do the 
buffering. It works, but it shouldn’t be necessary. And it’s not going to be 
viable in Rust because of 1:1 scheduling.

-Kevin
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
