On Fri, Mar 08, 2013 at 04:34:28PM -0600, scame...@beardog.cce.hp.com wrote:
> I get ~4x the IOPS with a block driver vs. scsi driver due to contention
> for locks in the scsi mid layer (in scsi_request_fn). It's the
> difference between the device being worth manufacturing vs. not.

Well that starts to qualify as a good reason I suppose. Of course it also
makes you wonder if perhaps some work on optimizing that part of the scsi
stack is in order (I have no idea if that's even plausible).
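
For concreteness, since the locking is the crux: scsi_request_fn() is
called with the per-device queue_lock held, so every submitting CPU
serializes on that one spinlock. A bio-based block driver opts out of the
request queue entirely via blk_queue_make_request(). A rough skeleton
against the 3.x-era block layer, all names made up, not the actual driver:

#include <linux/module.h>
#include <linux/blkdev.h>

static struct request_queue *fastdev_q;	/* made-up device queue */

/*
 * Entered straight from generic_make_request(); unlike scsi_request_fn()
 * no queue_lock is taken here, so submissions from many CPUs do not
 * serialize on a single spinlock.
 */
static void fastdev_make_request(struct request_queue *q, struct bio *bio)
{
	/*
	 * A real driver would turn the bio into a hardware command and
	 * post it to a submission queue; completing it immediately just
	 * keeps the sketch self-contained.
	 */
	bio_endio(bio, 0);
}

static int __init fastdev_init(void)
{
	fastdev_q = blk_alloc_queue(GFP_KERNEL);
	if (!fastdev_q)
		return -ENOMEM;
	/* Take over bio submission; the request queue is bypassed. */
	blk_queue_make_request(fastdev_q, fastdev_make_request);
	/* gendisk setup (alloc_disk()/add_disk()) omitted. */
	return 0;
}

static void __exit fastdev_exit(void)
{
	blk_cleanup_queue(fastdev_q);
}

module_init(fastdev_init);
module_exit(fastdev_exit);
MODULE_LICENSE("GPL");

That's the same shape nvme's submission path has; whether the midlayer
could be taught to do something similar is exactly the open question.
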
> See this thread: http://marc.info/?l=linux-scsi&m=135518042125008&w=2
>
> Driver is similar to nvme (also a new block driver), but this one is
> for SCSI over PCIe, basically highly parallelized access to very low
> latency devices, and trying to use the SCSI midlayer kills the IOPS.

Some nifty hardware, that's for sure.

> There were reasons back then for doing that one as a block driver
> which are no longer extant (hence the existence of the hpsa driver,
> which supplanted cciss for new smart array devices.)
>
> All other things being equal, I would also prefer a scsi driver.
> Heck, it's called SCSI over PCIe -- I tried like hell to get it
> to perform adequately as a SCSI driver, but all other things are
> not equal, not even close: the block driver was ~4x as fast.
>
> So we reluctantly go with a block driver, just like nvme did.

Makes sense. Perhaps that does mean having to teach grub about it then.

--
Len Sorensen