Roger Critchlow wrote:

    I can see that goroutines and channels are appealing programming
    abstractions, but have a hard time believing they could scale.
    Seems like the more goroutines you have, the more CPU cycles
    will be absorbed in switching among them. I could see how
    distributed Erlang would scale with lots of high-latency _network_
    messages in flight -- the amount of time for switching would be
    small compared to the latency of the message. That wouldn't seem
    to be the case with Google Go, which would all be in core.
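
    [The switching cost Roger worries about is easy to probe directly.
    A hedged sketch, not a rigorous benchmark: pass a single token
    around a ring of goroutines linked by unbuffered channels and
    divide wall time by hop count. The ring/main names are mine, and
    the per-hop figure varies by machine, so no number is claimed here.]

    package main

    import (
    	"fmt"
    	"time"
    )

    // ring builds n goroutines connected in a chain of channels and
    // pushes a token through the whole chain (hops+1) times. Each hop
    // forces one goroutine switch, so elapsed/total-hops approximates
    // the cost of a single channel-mediated context switch.
    func ring(n, hops int) time.Duration {
    	first := make(chan int)
    	in := first
    	for i := 0; i < n; i++ {
    		out := make(chan int)
    		go func(in, out chan int) {
    			for v := range in {
    				out <- v // forward the token to the next goroutine
    			}
    		}(in, out)
    		in = out
    	}
    	start := time.Now()
    	first <- hops
    	for v := range in {
    		if v == 0 {
    			break
    		}
    		first <- v - 1 // send the token around again
    	}
    	return time.Since(start)
    }

    func main() {
    	const n, hops = 100, 1000
    	elapsed := ring(n, hops)
    	total := n * (hops + 1)
    	fmt.Printf("%d hops: %v total, %v per hop\n",
    		total, elapsed, elapsed/time.Duration(total))
    }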

Right, but is that a Google Go problem, or is it our failure to build useful multi-core processors?
I don't think it's a processor design issue so much as a network and memory subsystem design issue. Given:

1) Concurrency = Bandwidth * Latency.
2) Latency can only be minimized so far.
3) Bandwidth can always be increased by adding wires.
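[Point 1 is the bandwidth-delay product: to keep a link saturated, the number of operations in flight must equal bandwidth times latency. A worked example with hypothetical numbers, just to make the units concrete:]

    package main

    import "fmt"

    // inFlight returns the concurrency needed to saturate a link:
    // operations in flight = bandwidth (ops/s) * latency (s).
    func inFlight(bandwidthPerSec, latencySec float64) float64 {
    	return bandwidthPerSec * latencySec
    }

    func main() {
    	// A link moving 1e6 messages/s with 1 ms latency needs
    	// 1000 messages concurrently in flight to stay busy.
    	fmt.Println(inFlight(1e6, 1e-3)) // 1000
    }

[Raise the latency tenfold and the required concurrency rises tenfold with it, which is why high-latency systems are where massive concurrency pays off.]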

By being limited to SMP-type systems, Go is assuming latency is already minimized. But the way you really get a lot of concurrency is by allowing for higher-latency communication (e.g. long wires between many processors). Go does not provide a programming model where memory can be accessed across cores. Even if the operating system did that for you, the Go scheduler would only know how to park goroutines waiting on channels, not goroutines waiting on memory. To my mind, what would be preferable is to have all memory be channels (i.e. as the Cray XMT implements in hardware). Alternatively, keep a small number of channels (compared to the number of memory addresses) but constrain the use of memory to named (typically local) address spaces, as in Sequoia or OpenCL.
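[A minimal sketch of the "all memory is channels" idea, in today's Go: a word of memory is owned by a goroutine, and every load and store is a channel operation the scheduler can block on, exactly as it blocks on ordinary channel sends. The cell/newCell names are mine, and this is an illustration of the abstraction, not of how the XMT hardware works; note the read-modify-write below is not atomic against other clients.]

    package main

    import "fmt"

    // cell is one word of "memory" reachable only through channels.
    type cell struct {
    	reads  chan int // each receive is a load
    	writes chan int // each send is a store
    }

    // newCell starts a goroutine that owns the value and serves
    // loads and stores over the two channels.
    func newCell(initial int) *cell {
    	c := &cell{reads: make(chan int), writes: make(chan int)}
    	go func() {
    		v := initial
    		for {
    			select {
    			case c.reads <- v: // serve a load
    			case v = <-c.writes: // accept a store
    			}
    		}
    	}()
    	return c
    }

    func main() {
    	c := newCell(41)
    	c.writes <- <-c.reads + 1 // a load, then a store, via channels
    	fmt.Println(<-c.reads)    // 42
    }

[The point of the sketch: because every access is a channel operation, a goroutine stalled on "memory" is just a goroutine parked on a channel, which the existing scheduler already knows how to handle.]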
Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
