Thanks Ian,

On Thu, Jun 16, 2016 at 12:13 PM, Ian Lance Taylor <i...@golang.org> wrote:

> On Thu, Jun 16, 2016 at 11:27 AM, Dmitry Orlov
> <dmitry.or...@mixpanel.com> wrote:
> >
> > I am curious how the goroutine scheduler picks which goroutine to run,
> > among several runnable ones. Does it optimize for fairness in any way?
>
> The current scheduler does not optimize for fairness.  Of course, the
> scheduler has changed in the past and it will change in the future.
>
> The current scheduler (approximately) associates goroutines with
> threads.  When a thread has to choose a new goroutine to run, it will
> preferentially choose one of its associated goroutines.  If it doesn't
> have any ready to run, it will steal one from another thread.
>
>
> > I ran a quick experiment and found out that goroutines that run for longer
> > intervals between yield points receive a proportionally larger CPU share.
>
> Yes, that is what I would have guessed from the current scheduler.
>
>
> > Does this test expose the scheduler's CPU policy correctly, or is it biased?
> > What is the best reading about the scheduler's policies?
>
> The comment at the top of runtime/proc.go and
> https://golang.org/s/go11sched.
>
>
> It's a more or less understood design goal that the goroutine
> scheduler is optimized for network servers, where each goroutine
> typically does some relatively small amount of work followed by
> network or disk I/O.  The scheduler is not optimized for goroutines
> that do a lengthy CPU-bound computation.  We leave the kernel
> scheduler to handle those, and we expect that programs will set
> GOMAXPROCS to a value larger than the number of lengthy CPU-bound
> computations they expect to run in parallel.  While I'm sure we'd all
> be happy to see the scheduler do a better job of handling CPU-bound
> goroutines, that would only be acceptable if there were no noticeable
> cost to the normal case of I/O-bound goroutines.
>

This is a good model for network servers. What we found in some of our
traffic shapes, though, is that most queries (to one server) have roughly the
same number of I/O-related yields, but the length of the CPU runs between
those yields varies widely.
If I understand it correctly, the Go compiler and runtime also insert
additional yield points that are not related to I/O. That can help fairness
too.
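
For anyone curious, here is a simplified sketch of the kind of experiment I
mean (not my actual test code; GOMAXPROCS(1), the burst durations, and the use
of runtime.Gosched as the explicit yield point are all just for illustration).
The goroutine that runs long CPU bursts between yields ends up with a
proportionally larger share:

package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // force both goroutines to share one proc

	var short, long int64 // rough tally of busy-loop time per goroutine, in microseconds

	burn := func(burst time.Duration, total *int64) {
		for {
			end := time.Now().Add(burst)
			for time.Now().Before(end) {
				// CPU-bound burst between yield points
			}
			atomic.AddInt64(total, int64(burst/time.Microsecond))
			runtime.Gosched() // explicit yield point
		}
	}

	go burn(100*time.Microsecond, &short) // short bursts between yields
	go burn(10*time.Millisecond, &long)   // long bursts between yields

	time.Sleep(2 * time.Second)
	fmt.Printf("short bursts: %d us, long bursts: %d us\n",
		atomic.LoadInt64(&short), atomic.LoadInt64(&long))
}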

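Also, just to make sure I'm reading the GOMAXPROCS suggestion correctly: is it
something along these lines, where the expected number of CPU-bound
computations is a made-up figure for illustration?

package main

import "runtime"

func main() {
	// Illustration only: expectedCPUBound is an assumed workload figure.
	const expectedCPUBound = 4
	// Keep GOMAXPROCS above the number of lengthy CPU-bound computations
	// expected to run in parallel, so I/O-bound goroutines still get procs.
	runtime.GOMAXPROCS(expectedCPUBound + 2)
	// ... start the rest of the program's goroutines ...
}
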

>
> Ian
>
