On Sun, Feb 03 2008, Nick Piggin wrote:
> On Fri, Jul 27, 2007 at 06:21:28PM -0700, Suresh B wrote:
> >
> > The second experiment we did was migrating the IO submission to the
> > IO completion CPU. Instead of submitting the IO on the same CPU where
> > the request arrived, in this experiment the IO submission gets
> > migrated to the CPU that is processing IO completions (interrupts).
> > This minimizes the access to remote cachelines (which happens in the
> > timer, slab and SCSI layers). The IO submission request is forwarded
> > to the kblockd thread on the CPU receiving the interrupts. As part of
> > this, we also made the kblockd thread on each CPU the highest-priority
> > thread, so that IO gets submitted as soon as possible on the interrupt
> > CPU without any delay. On an x86_64 SMP platform with 16 cores, this
> > resulted in a 2% performance improvement, and 3.3% on a two-node ia64
> > platform.
> >
> > A quick and dirty prototype patch (not meant for inclusion) for this
> > IO migration experiment is appended to this e-mail.
> >
> > Observation #1 mentioned above also applies to this experiment: CPUs
> > processing interrupts now have to carry the IO submission/processing
> > load as well.
> >
> > Observation #2: this introduces some migration overhead during IO
> > submission. With the current prototype, every incoming IO request
> > results in an IPI and a context switch (to the kblockd thread) on the
> > interrupt-processing CPU. This issue needs to be addressed, and the
> > main challenge is an efficient mechanism for doing this IO migration
> > (how much batching to do, and when to send the migrate request?), so
> > that we neither delay the IO much nor cause much overhead during the
> > migration itself.
>
> Hi guys,
>
> Just had another way we might do this: migrate the completions out to
> the submitting CPUs rather than migrating submission into the
> completing CPU.
>
> I've got a basic patch that passes some stress testing. It seems fairly
> simple to do at the block layer, and the bulk of the patch involves
> introducing a scalable smp_call_function for it.
>
> It could be optimised further by batching up IPIs, optimising the call
> function path, or even migrating the completion event at a different
> level...
>
> However, this is a first cut. It actually seems to take slightly more
> CPU to process block IO (~0.2%)... however, this is on my dual-core
> system that shares an LLC, which means there are very few cache
> benefits to the migration, but non-zero overhead. So on multi-socket
> systems it will hopefully get into positive territory.
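A minimal sketch of the direction Nick describes above, recording the
submitting CPU in the request and bouncing the completion back to it with
an IPI, might look roughly like the following. This is not his patch: the
rq->submit_cpu field and the complete_on_submitter() and
finish_request_locally() helpers are invented for illustration, and plain
smp_call_function_single() is used only to keep the sketch short; a real
completion path runs in hard-IRQ context and wants an asynchronous,
preallocated-call-data form, which is what the "scalable
smp_call_function" in the patch is about.

#include <linux/blkdev.h>
#include <linux/smp.h>

/*
 * Runs on rq->submit_cpu via IPI, where the cachelines touched at
 * submit time (timers, slab, scsi) are still warm.
 */
static void finish_request_locally(void *data)
{
	struct request *rq = data;

	blk_complete_request(rq);
}

/* Called on whichever CPU the device interrupt landed on. */
static void complete_on_submitter(struct request *rq)
{
	int cpu = get_cpu();

	if (cpu == rq->submit_cpu) {
		/* Interrupt already arrived on the submitting CPU. */
		blk_complete_request(rq);
	} else {
		/*
		 * One IPI per request in this naive form; batching IPIs,
		 * or an async call with per-request call data, is needed
		 * to make this safe and cheap from interrupt context.
		 */
		smp_call_function_single(rq->submit_cpu,
					 finish_request_locally, rq, 0);
	}
	put_cpu();
}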
That's pretty funny, I did pretty much the exact same thing last week! The
primary difference between yours and mine is that I used a more private
interface to signal a softirq raise on another CPU, instead of allocating
call data and exposing a generic interface. That put the locking in
blk-core instead, turning blk_cpu_done into a structure with a lock and a
list_head instead of just a list head, and I intercept at
blk_complete_request() time instead of waiting for an already raised
softirq on that CPU.

Didn't get around to any performance testing yet, though. Will try and
clean it up a bit and do that.

--
Jens Axboe
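A rough sketch of the shape Jens describes, loosely following the
2.6.24-era blk-softirq code rather than his actual patch, might look like
the code below: blk_cpu_done grows a lock so another CPU can queue
completions onto it, and blk_complete_request() steers the request to the
submitting CPU's list. The rq->submit_cpu field and the
raise_blk_softirq_on() helper that kicks BLOCK_SOFTIRQ on a remote CPU are
stand-ins for the "more private interface" mentioned above; per-CPU
lock/list initialisation at boot is omitted.

#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct blk_cpu_done {
	spinlock_t		lock;
	struct list_head	list;
};
static DEFINE_PER_CPU(struct blk_cpu_done, blk_cpu_done);

/*
 * Intercept at completion time: queue the request on the submitting
 * CPU's list and make sure BLOCK_SOFTIRQ runs there.
 */
void blk_complete_request(struct request *rq)
{
	int cpu = rq->submit_cpu;	/* recorded at submit time */
	struct blk_cpu_done *bcd = &per_cpu(blk_cpu_done, cpu);
	unsigned long flags;

	spin_lock_irqsave(&bcd->lock, flags);
	list_add_tail(&rq->donelist, &bcd->list);
	spin_unlock_irqrestore(&bcd->lock, flags);

	if (cpu == smp_processor_id())
		raise_softirq(BLOCK_SOFTIRQ);
	else
		raise_blk_softirq_on(cpu);	/* remote raise, see above */
}

/*
 * BLOCK_SOFTIRQ handler: splice the local list under the lock, then run
 * the driver's completion for each request.
 */
static void blk_done_softirq(struct softirq_action *h)
{
	struct blk_cpu_done *bcd = &__get_cpu_var(blk_cpu_done);
	struct request *rq, *tmp;
	LIST_HEAD(local);

	spin_lock_irq(&bcd->lock);
	list_splice_init(&bcd->list, &local);
	spin_unlock_irq(&bcd->lock);

	list_for_each_entry_safe(rq, tmp, &local, donelist)
		rq->q->softirq_done_fn(rq);
}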