Hi Stan,

On Monday, 27 August 2012, Stan Hoeppner wrote:
> On 8/27/2012 8:27 AM, Stefan Monnier wrote:
> >> I run an SSD on my MCP61P so lack of NCQ has no impact
> >> whatsoever--SSD's have no moving parts, and all seeks
> >> are instantaneous.
> >
> > While I haven't heard of NCQ improving read speed of SSDs, it can
> > have a significant positive impact on write speed for SSDs.
>
> Some SSD controllers, such as the later SandForce models, do benefit
> from NCQ with some server-oriented workloads, and little to none with
> others. Controllers such as the Indilinx, JMicron, and Samsung don't
> benefit from NCQ at all, with any workload.
>
> For a desktop user workload, there will be no noticeable performance
> difference, because such applications don't do parallel IO.
Are you sure about 1) the I/O behavior of desktop applications and 2) NCQ?

1) I see a noticeable difference for my Intel SSD 320 with different iodepth values in a fio job. And to my current understanding, more than one single-threaded I/O-generating desktop application can easily be running at any given point in time. Nepomuk desktop search accesses the SSD with up to 10 threads at times, not even counting the Virtuoso database server with its 5.4 GiB database – no kidding, that's for real. Then there is Akonadi with its PostgreSQL database, KMail, Iceweasel with its cache and sqlite3 database, and whatnot.

While sync I/O calls are synchronous on the syscall side, the kernel is still free to schedule the requests in bigger batches on the lower levels of the I/O stack, so I'd expect that desktop workloads can cause effects similar to I/O depths somewhat higher than one. Indeed, running fio with numjobs=64 instead of iodepth=64 gives similar results with that Intel SSD, except for a higher context switch rate and CPU usage. (I can dig out those results if wanted.)

2) I am not sure about NCQ, though. I'd never disabled it in my tests. The Intel SSD 320 reports a queue depth of 32 with hdparm -I. It may not make much of a difference, I don't know. Your seek time argument makes sense to me. But then I thought the SSD firmware may have a better chance to combine requests into erase-block-sized units when it gets more data to deal with at once. So it may make some difference for writes. But I could be completely off track with my assumption, and I bet only a test will show. SSD firmwares are like big, interesting and fascinating black boxes to me ;)

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

Archive: http://lists.debian.org/201208282101.45190.mar...@lichtvoll.de
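P.S.: For anyone who wants to reproduce the comparison mentioned above, here is a sketch of a fio job file that runs the same 4k random-write workload once with a deep queue (iodepth=64 via libaio) and once with 64 synchronous jobs (numjobs=64). The file path, size and runtime are my own assumptions, not the values from my original tests:

```shell
# Write a fio job file comparing queued vs. multi-threaded submission.
# Assumptions: test file under /tmp, 256 MiB size, 30 s per phase.
cat > ssd-queue-test.fio <<'EOF'
[global]
filename=/tmp/fio-testfile
size=256m
rw=randwrite
bs=4k
direct=1
runtime=30
time_based

; one process, 64 requests in flight via the async libaio engine
[queued]
ioengine=libaio
iodepth=64

; 64 processes, each doing synchronous I/O with queue depth 1
; (stonewall makes this phase start only after [queued] finishes)
[threaded]
stonewall
ioengine=sync
numjobs=64
EOF

# Then run:  fio ssd-queue-test.fio
```

Comparing the aggregate write bandwidth of the two phases should show whether the drive cares how the parallelism is generated, as opposed to how much of it there is.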