On Tue, Sep 25, 2012 at 01:58:06PM +0200, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 19:11 -0700, Linus Torvalds wrote:
> > In the not-so-distant past, we had the Intel "Dunnington" Xeon, which
> > was iirc basically three Core 2 Duos bolted together (i.e. three
> > clusters of two cores sharing L2, and a fully shared L3). So that was
> > a true multi-core with fairly big shared L2, and it really would be
> > sad to not use the second core aggressively.
>
> Ah indeed. My Core2Quad didn't have an L3 afaik (it's sitting around
> without a PSU atm, so checking gets a little hard), so the LLC level
> was the L2 and all worked out right (it also not having SMT helped, of
> course).
>
> But if there was a Xeon chip that did add a package L3 then yes, all
> this would become more interesting still. We'd need to extend the
> scheduler topology a bit as well; I don't think it can currently handle
> this well.
>
> So I guess we get to do some work for Steamroller.
Right, but before that we can still do some experimenting on Bulldozer -
we have the shared 2M L2 there too and it would be nice to improve
select_idle_sibling there.

For example, a couple of days ago I did some tbench measurements on
Bulldozer, with and without select_idle_sibling. Setup: single-socket
OR-B box (8 cores, 4 CUs), tbench_srv on localhost, tbench default
settings as in Debian testing.

Summary, throughput in MB/sec:

# clients        1        2        4        8       12       16

3.6-rc6+tip/auto-latest:
            115.91  238.571  469.606  1865.77  1863.08  1851.46

3.6-rc6+tip/auto-latest-kill select_idle_sibling():
           354.619  534.714  900.069  1969.35  1955.91  1940.84

3.6-rc6+tip/auto-latest
-----------------------
Throughput 115.91 MB/sec 1 clients 1 procs max_latency=0.296 ms
Throughput 238.571 MB/sec 2 clients 2 procs max_latency=1.296 ms
Throughput 469.606 MB/sec 4 clients 4 procs max_latency=0.340 ms
Throughput 1865.77 MB/sec 8 clients 8 procs max_latency=3.393 ms
Throughput 1863.08 MB/sec 12 clients 12 procs max_latency=0.322 ms
Throughput 1851.46 MB/sec 16 clients 16 procs max_latency=2.059 ms

3.6-rc6+tip/auto-latest-kill select_idle_sibling()
--------------------------------------------------
Throughput 354.619 MB/sec 1 clients 1 procs max_latency=0.321 ms
Throughput 534.714 MB/sec 2 clients 2 procs max_latency=2.651 ms
Throughput 900.069 MB/sec 4 clients 4 procs max_latency=10.823 ms
Throughput 1969.35 MB/sec 8 clients 8 procs max_latency=1.630 ms
Throughput 1955.91 MB/sec 12 clients 12 procs max_latency=3.236 ms
Throughput 1940.84 MB/sec 16 clients 16 procs max_latency=0.314 ms

So improving this select_idle_sibling thing wouldn't be such a bad thing.

Btw, I'll run your patch at
http://marc.info/?l=linux-kernel&m=134850571330618 with the same
benchmark to see what it brings.

Thanks.
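For reference, here is roughly what select_idle_sibling() looks like in
a ~3.6 kernel/sched/fair.c and what "killing" it amounts to. This is a
sketch from memory to illustrate the idea - helper names (idle_cpu,
sd_llc, for_each_lower_domain, sched_group_cpus, tsk_cpus_allowed) as in
mainline of that era - not a verbatim copy and not the exact diff behind
the numbers above. The stock version scans the groups below the
shared-cache (LLC) domain for a completely idle core; the "kill" variant
skips that scan and just uses the wakeup target.

/*
 * Sketch only: roughly the shape of select_idle_sibling() in a ~3.6
 * kernel/sched/fair.c, from memory.
 */
static int select_idle_sibling(struct task_struct *p, int target)
{
        struct sched_domain *sd;
        struct sched_group *sg;
        int i;

        /* Wakeup target (waker's CPU or the task's previous CPU) is idle: take it. */
        if (idle_cpu(target))
                return target;

        /*
         * The "kill" variant effectively stops here, i.e. does
         *
         *      return target;
         *
         * and never looks for an idle core. Stock behaviour below: walk
         * the domains under the LLC and pick the first group (core)
         * whose CPUs are all idle and which intersects the task's
         * affinity mask.
         */
        sd = rcu_dereference(per_cpu(sd_llc, target));
        for_each_lower_domain(sd) {
                sg = sd->groups;
                do {
                        if (!cpumask_intersects(sched_group_cpus(sg),
                                                tsk_cpus_allowed(p)))
                                goto next;

                        for_each_cpu(i, sched_group_cpus(sg)) {
                                if (!idle_cpu(i))
                                        goto next;
                        }

                        return cpumask_first_and(sched_group_cpus(sg),
                                                 tsk_cpus_allowed(p));
next:
                        sg = sg->next;
                } while (sg != sd->groups);
        }

        return target;
}

The interesting question for Bulldozer is at which level that scan
should be looking - the socket-wide L3 or the CU sharing the 2M L2.

--
Regards/Gruss,
Boris.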