Hi,

When a request is dispatched to the LLD via dm-rq and the result is
BLK_STS_*RESOURCE, dm-rq frees the request. However, the LLD may have
allocated private data for this request, so freeing it this way causes a
memory leak.

Add a .cleanup_rq() callback and implement it in SCSI to fix the issue.
And SCSI is…
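For illustration, the plumbing could look roughly like the minimal sketch
below, assuming the hook is added to struct blk_mq_ops under the name used
in this cover letter (the helper name and placement are assumptions, not
the verbatim patch):

/*
 * Sketch only: assumes a new optional hook has been added to
 * struct blk_mq_ops:
 *
 *	void (*cleanup_rq)(struct request *rq);
 *
 * A small helper lets callers such as dm-rq invoke it, if set, right
 * before freeing a dispatched-but-uncompleted request.
 */
static inline void blk_mq_cleanup_rq(struct request *rq)
{
	if (rq->q->mq_ops->cleanup_rq)
		rq->q->mq_ops->cleanup_rq(rq);
}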
On Wed, Jul 17, 2019 at 08:08:27PM +0200, Ard Biesheuvel wrote:
>
> Since the kernel does not support CTS for XTS anyway, and since no
> AF_ALG users can portably rely on this, I agree with Eric that the
> only sensible way to address this is to disable this functionality in
> the driver.
But the…
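Disabling this functionality in the driver would amount to refusing
message lengths that require ciphertext stealing. A minimal sketch of
such a length check (the helper name is hypothetical, not the actual
caam fix):

#include <crypto/aes.h>
#include <linux/errno.h>

/* XTS without CTS can only process whole AES blocks. */
static int xts_check_input_len(unsigned int cryptlen)
{
	if (cryptlen < AES_BLOCK_SIZE || cryptlen % AES_BLOCK_SIZE)
		return -EINVAL;
	return 0;
}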
Implement the .cleanup_rq() callback to free the driver-private part of
the request. This avoids leaking that part when the request is never
completed by SCSI and is eventually freed by blk-mq or an upper layer
(such as dm-rq).
Cc: Ewan D. Milne
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
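A sketch of what the SCSI-side callback might look like, inferred from
the description above (the exact teardown helper is an assumption, not
quoted from the patch):

static void scsi_cleanup_rq(struct request *rq)
{
	/*
	 * RQF_DONTPREP marks a request whose driver-private part was set
	 * up during preparation; tear it down and clear the flag so the
	 * request can be freed without leaking the SCSI command data.
	 */
	if (rq->rq_flags & RQF_DONTPREP) {
		scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq));
		rq->rq_flags &= ~RQF_DONTPREP;
	}
}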
dm-rq needs to free a request that has been dispatched to, but not
completed by, the underlying queue. However, the underlying queue may have
allocated private data for the request in .queue_rq(), so dm-rq would leak
that private part.

Add a new .cleanup_rq() callback to fix the memory leak issue.
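On the dm-rq side, the call site could look roughly like this sketch; the
surrounding function name is illustrative, and the helper follows the
blk_mq_cleanup_rq() sketch earlier:

static void dm_free_clone_request(struct request *clone)
{
	/* Give the underlying queue a chance to free its private part
	 * before the clone request itself is released. */
	blk_mq_cleanup_rq(clone);
	blk_put_request(clone);
}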
Hi,
Just a couple of minor nits:
On 7/17/19 5:46 PM, Jaskaran Khurana wrote:
> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
> index 3834332f4963..c2b04d226c90 100644
> --- a/drivers/md/Kconfig
> +++ b/drivers/md/Kconfig
> @@ -490,6 +490,18 @@ config DM_VERITY
>
> If unsure, say N
This patch set adds in-kernel pkcs7 signature checking for the roothash
of the dm-verity hash tree.

The verification is to support cases where the roothash is not secured by
Trusted Boot, UEFI Secureboot or similar technologies.

One of the use cases for this is for dm-verity volumes mounted after boot,
where the root hash provided during the creation of the dm-verity volume
has to be secure and thus in…
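For illustration, in-kernel pkcs7 checking of a detached signature over
the root hash can be a thin wrapper around verify_pkcs7_signature(); a
sketch assuming the builtin trusted keyring is used (the wrapper name is
hypothetical):

#include <linux/verification.h>

static int verity_verify_root_hash(const void *root_hash, size_t hash_len,
				   const void *sig_data, size_t sig_len)
{
	/* A NULL trusted_keys argument means "use the builtin trusted
	 * keyring" for validating the pkcs7 signature. */
	return verify_pkcs7_signature(root_hash, hash_len,
				      sig_data, sig_len,
				      NULL, VERIFYING_UNSPECIFIED_SIGNATURE,
				      NULL, NULL);
}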
On 7/17/2019 1:16 AM, Eric Biggers wrote:
> Hi Horia,
>
> On Tue, Jul 16, 2019 at 05:46:29PM +, Horia Geanta wrote:
>> Hi,
>>
>> With fuzz testing enabled, I am seeing xts(aes) failures on caam drivers.
>>
>> Below are several failures, extracted from different runs:
>>
>> [3.921654] alg: …
Hi Nikos,

Thanks for elaborating on those details.

Hash table collisions, exception store entry commit overhead, SSD cache
flush issues, etc. are all valid points regarding performance and working
set footprints in general.

Do you have any performance numbers for your solution vs. a snapshot one…
Hi,
On 16/07/2019 20:08, Jaskaran Singh Khurana wrote:
>>> Could you please provide feedback on this v6 version?
>>
>> Hi,
>>
>> I am ok with the v6 patch; I think Mike will return to it in 5.4 reviews.
>>
>
> Thanks for the help and also for reviewing this patch. Could you please
> add Reviewed-by…
Currently, kcopyd has a sub-job size of 64KB and a maximum of 8 sub-jobs.
As a result, for any kcopyd job, we have at most 512KB (8 x 64KB) of I/O
in flight.

This upper limit on the amount of in-flight I/O under-utilizes fast
devices and results in decreased throughput, e.g., when writing to a sn…
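The arithmetic behind that limit, as a sketch in C (the identifier names
are assumptions, not necessarily those in drivers/md/dm-kcopyd.c):

/* Each sub-job covers 128 sectors of 512 bytes = 64 KB. */
#define SUB_JOB_SECTORS		128u
#define SECTOR_BYTES		512u

/* At most 8 sub-jobs are in flight per kcopyd job. */
#define SPLIT_COUNT		8u

/* Maximum in-flight I/O per job: 8 * 64 KB = 512 KB. */
#define MAX_IN_FLIGHT_BYTES	(SPLIT_COUNT * SUB_JOB_SECTORS * SECTOR_BYTES)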