On 23 December 2011 11:11, Steve Kargl <s...@troutmask.apl.washington.edu> wrote:
> Ah, so good news!  I cannot reproduce this problem that
> I saw 3+ years ago on the 4-cpu node, which is currently
> running a ULE kernel.  When I killed the (N+1)th job,
> the N remaining jobs are spread across the N cpus.

Ah, good.

> One difference between the 2008 tests and today's tests is
> the number of available cpus.  In 2008, I ran the tests
> on a node with 8 cpus, while today's test used a
> node with only 4 cpus.  If this behavior is a scaling
> issue, I can't currently test it.  But today's tests
> are certainly encouraging.

Do you not have access to anything with 8 CPUs in it? It'd be nice to
confirm that this has indeed been fixed.

Does ULE care (much) whether the CPUs are hyperthreads or real cores?
Would that play a part in how it tries to schedule/spread jobs?

Adrian

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"