On Fri, Mar 11, 2016 at 9:31 AM, Warner Losh <i...@bsdimp.com> wrote:

>
> And keep in mind the original description was this:
>
> Quote:
>
> Intel NVMe controllers have a slow path for I/Os that span
> a 128KB stripe boundary but ZFS limits ashift, which is derived
> from d_stripesize, to 13 (8KB) so we limit the stripesize
> reported to geom(8) to 4KB.
>
> This may result in a small number of additional I/Os
> to require splitting in nvme(4), however the NVMe I/O
> path is very efficient so these additional I/Os will cause
> very minimal (if any) difference in performance or
> CPU utilisation.
>
> unquote
>
> so the issue seems to be getting blown up a bit. It's better if you
> don't generate these I/Os, but the driver copes by splitting them
> on the affected drives, causing a small inefficiency: you're
> increasing the number of I/Os needed to complete the request,
> which cuts into the IOPS budget.
>
> Warner
>
>

Warner is correct.  This is something specific to some of the Intel NVMe
controllers.  The core nvme(4) driver detects Intel controllers that
benefit from splitting I/O that crosses a 128KB stripe boundary, and will
do that splitting internal to the driver.  Reporting this stripe size
further up the stack only serves to reduce the number of I/Os that
require splitting in the first place.

In practice, there is no noticeable impact on performance or latency when
splitting I/O on 128KB boundaries.  Larger I/Os are more likely to require
splitting, but for larger I/Os you will hit overall bandwidth limitations
before getting close to IOPS limitations.

-Jim
_______________________________________________
svn-src-all@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to "svn-src-all-unsubscr...@freebsd.org"
