On Thu, 18 Sep 2025, Keith Busch wrote:

> On Thu, Sep 18, 2025 at 09:16:42AM -0700, Keith Busch wrote:
> > +           bio_advance_iter_single(ctx->bio_in, &ctx->iter_in, len);
> > +           bytes -= len;
> > +   } while (bytes);
> > +
> > +   sg_mark_end(sg_in);
> > +   sg_in = dmreq->sg_in[0];
> 
> Err, there should be an '&' in there, "&dmreq->sg_in[0];"
> 
> By the way, I only tested plain64 for the ivmode. That appears to work
> fine, but I am aware this will not be successful with elephant, lmk, or
> tcw. So just an RFC for now to see if it's worth pursuing.

Hi
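
For the record, Keith's '&' fix in the quote above boils down to this (the 
stub type and helper below are made up for illustration; the 4-entry sg_in[] 
array matches struct dm_crypt_request):

#include <linux/scatterlist.h>

/* Stand-in for the relevant part of struct dm_crypt_request. */
struct dmreq_stub {
	struct scatterlist sg_in[4];
};

static struct scatterlist *first_sg(struct dmreq_stub *dmreq)
{
	/*
	 * sg_in[0] is a struct scatterlist, so assigning it to a
	 * 'struct scatterlist *' needs the address-of operator.
	 */
	return &dmreq->sg_in[0];
}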

I'd like to ask - how much does it help performance? By what percentage does 
your application run faster?

Another question - what happens if the user calls preadv or pwritev with 
direct I/O and the request needs more than 4 scatterlist entries? Will this 
be rejected in the upper layers, or will it reach dm-crypt and fail with 
-EINVAL? Note that returning an error from dm-crypt could be quite 
problematic, because it would kick the leg out from under RAID if there were 
a RAID layer above dm-crypt. I think that we should return BLK_STS_NOTSUPP, 
because RAID would ignore that status.
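
Something like this is what I have in mind - just a sketch, assuming the 
inline path keeps the fixed 4-entry sg_in[]/sg_out[] arrays; the helper name 
and the limit macro are made up:

#include <linux/bio.h>
#include <linux/blk_types.h>

#define CRYPT_INLINE_SG_MAX 4	/* assumed size of dmreq->sg_in[] */

static blk_status_t crypt_check_inline_sg(struct bio *bio)
{
	/*
	 * bio_segments() counts the segments this bio maps to; when it
	 * does not fit the on-stack scatterlist, report BLK_STS_NOTSUPP
	 * so that a RAID layer above dm-crypt can ignore the failure
	 * instead of dropping the leg.
	 */
	if (bio_segments(bio) > CRYPT_INLINE_SG_MAX)
		return BLK_STS_NOTSUPP;

	return BLK_STS_OK;
}

Whether dm-crypt should then fall back to the existing path or just report 
the status is a separate question.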

I am considering committing this for kernel 6.19 (it's too late to add it to 
the 6.18 merge window).

Mikulas

