On 4/20/21 1:50 AM, Jens Axboe wrote:
> On 4/19/21 10:26 AM, Coly Li wrote:
>> On 4/19/21 11:40 PM, Randy Dunlap wrote:
>>> On 4/19/21 3:23 AM, Stephen Rothwell wrote:
>>>> Hi all,
>>>>
>>>> Changes since 20210416:
>>>
IMM" and "select DAX" to "depends on LIBNVDIMM"
and "depends on DAX" in bcache Kconfig
- Or change "depends on BLK_DEV" to "select BLK_DEV" in nvdimm Kconfig.
I want to ask for the proper way to handle such a dependency, and I will
follow the guidance now and in the future.
Thanks in advance for the advice.
Coly Li
> }
>
> + err = -ENOMEM;
> ns = kzalloc(sizeof(struct bch_nvm_namespace), GFP_KERNEL);
> if (!ns)
> goto bdput;
>
Copied, added into my queue for rc1.
Thanks.
Coly Li
nced, it is fair to take this patch in.
Could you please post a v3 version which removes the .bss information?
Coly Li
> here is the statistics:
> Sections: (arm64 platform)
> Idx name size
> -.init.text 0240
> +.init.text 0228
>
> -.ro
On 3/17/21 12:30 PM, Bhaskar Chowdhury wrote:
>
> s/condidate/candidate/
> s/folowing/following/
>
> Signed-off-by: Bhaskar Chowdhury
I will add it in my for-next queue.
Thanks.
Coly Li
> ---
> drivers/md/bcache/journal.c | 4 ++--
> 1 file changed, 2 in
91f32959b..f154c89d1326 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -16,6 +16,7 @@
> #include "features.h"
>
> #include
> +#include
> #include
> #include
> #include [snipped]
For bcache part, Acked-by: Coly Li
Thanks.
Coly Li
want the lookup to try more times
(because GFP_NOIO is set), which is much better than returning -EIO
immediately to the caller.
Therefore NOT setting ret to -ENOMEM at the patched location appears to
be intentional, IMHO.
Thanks.
Coly Li
On 3/3/21 4:20 PM, Hannes Reinecke wrote:
> On 3/2/21 5:02 AM, Coly Li wrote:
>> This patch adds the following helper structure and routines into
>> badblocks.h,
>> - struct bad_context
>> This structure is used in improved badblocks code for bad table
>>
badblocks_set(), and just add a new
one named _badblocks_set(). Later patch will remove current existing
badblocks_set() code and make it as a wrapper of _badblocks_set(). So
the newly added changes won't be mixed with the deleted code, and code
review will be easier.
Signed-off-by: Coly Li
---
() and _badblocks_check().
This patch only contains the old code deletion; the new code for the
improved algorithms is in the previous patches.
Signed-off-by: Coly Li
---
block/badblocks.c | 310 +-
1 file changed, 3 insertions(+), 307
eview.
Signed-off-by: Coly Li
---
block/badblocks.c | 99 +++
1 file changed, 99 insertions(+)
diff --git a/block/badblocks.c b/block/badblocks.c
index 4db6d1adff42..304b91159a42 100644
--- a/block/badblocks.c
+++ b/block/badblocks.c
@@ -1249,6 +124
ks_set()/badblocks_clear()/badblocks_check().
Signed-off-by: Coly Li
---
block/badblocks.c | 374 ++
1 file changed, 374 insertions(+)
diff --git a/block/badblocks.c b/block/badblocks.c
index d39056630d9c..fd76bbe7b5a2 100644
--- a/block/badblocks.c
+++
ce for any review comment and suggestion.
Coly Li (6):
badblocks: add more helper structure and routines in badblocks.h
badblocks: add helper routines for badblock ranges handling
badblocks: improvement badblocks_set() for multiple ranges handling
badblocks: improve badblocks_clear() fo
e improved badblocks code in following
patches.
Signed-off-by: Coly Li
---
include/linux/badblocks.h | 32
1 file changed, 32 insertions(+)
diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
index 2426276b9bd3..166161842d1f 100644
--- a/inc
_badblock_clear() for the improvement.
A later patch will delete the current badblock_clear() code and make it
a wrapper of _badblock_clear(), so the code change is much clearer for
review.
Signed-off-by: Coly Li
---
block/badblocks.c | 319 ++
1 file
On 2/13/21 12:42 AM, David Laight wrote:
> From: Coly Li
>> Sent: 12 February 2021 16:02
>>
>> On 2/12/21 11:31 PM, David Laight wrote:
>>>>> if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID)
>>>>> {
>>
fp_term is 64bit and upgrades the product to 64bit.
The above is just my guess; I feel the compiler should have enough clues
to promote the product to 64-bit and avoid the overflow, but I know
almost nothing about compiler internals.
>
> I hope BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW is zero :-)
Why would BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW being zero help avoid
the overflow? Could you please provide more detailed information?
I am not challenging you; I just want to extend my knowledge by learning
from you. Thanks in advance.
Coly Li
gh *
> (c->gc_stats.in_use -
> BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);
> }
> fps = div_s64(dirty, dirty_buckets) * fp_term;
>
Hmm, shouldn't such things be handled by the compiler? Otherwise this
kind of potential overflow issue will come up endlessly.
I am not a compiler expert; do we have to write such explicit type casts
all the time?
Coly Li
On 1/31/21 2:59 AM, Joe Perches wrote:
> On Mon, 2020-08-24 at 21:56 -0700, Joe Perches wrote:
>> Use semicolons and braces.
>
> ping?
It is in my for-next now, thanks for reminding.
Coly Li
>
>> Signed-off-by: Joe Perches
>> ---
>> drivers/md/bcache/b
bio to nvme will be flagged as (REQ_SYNC | REQ_IDLE),
> then fio for writing will get about 1000M/s bandwidth.
>
> Fixes: ad0d9e76a412 ("bcache: use bio op accessors")
> CC: Mike Christie
> Signed-off-by: Dongsheng Yang
The V3 patch is added to my for-next queue. Thanks for
On 1/26/21 12:32 PM, Dongsheng Yang wrote:
>
> 在 2021/1/25 星期一 下午 12:53, Coly Li 写道:
>> On 1/25/21 12:29 PM, Dongsheng Yang wrote:
>>> commit ad0d9e76(bcache: use bio op accessors) makes the bi_opf
>>> modified by bio_set_op_attrs(). But there is a logi
On 1/25/21 2:43 PM, Chanwoo Lee wrote:
> From: ChanWoo Lee
>
> From the 4.19 kernel, thread related code has been removed in queue.c.
> So we can exclude the unnecessary header file.
>
> Signed-off-by: ChanWoo Lee
Acked-by: Coly Li
Thanks.
Coly Li
> ---
> drivers/mmc
/request.c
> +++ b/drivers/md/bcache/request.c
> @@ -244,7 +244,7 @@ static void bch_data_insert_start(struct closure *cl)
> trace_bcache_cache_insert(k);
> bch_keylist_push(&op->insert_keys);
>
> - bio_set_op_attrs(n, REQ_OP_WRITE, 0);
> + n->bi_opf |= REQ_OP_WRITE;
> bch_submit_bbio(n, op->c, k, 0);
> } while (n != bio);
The fix looks OK to me; I'd like to see an opinion from Mike Christie too.
Thanks for the catch.
Coly Li
Lastly, below is the performance data for all the testing result,
> including the data from production env:
> https://docs.google.com/document/d/
> 1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing
>
> Signed-off-by: dongdong tao
It looks find to me. The above two typ
23,9 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
>
> dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
> dc->writeback_rate_p_term_inverse = 40;
> + dc->writeback_rate_fp_term_low = 1;
> + dc->writeback_rate_fp_term_mid = 10;
> + dc->writeback_rate_fp_term_high = 1000;
Could you please explain a bit how the above 3 numbers were decided?
> dc->writeback_rate_i_term_inverse = 1;
>
> WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
> diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
> index 3f1230e22de0..02b2f9df73f6 100644
> --- a/drivers/md/bcache/writeback.h
> +++ b/drivers/md/bcache/writeback.h
> @@ -16,6 +16,10 @@
>
> #define BCH_AUTO_GC_DIRTY_THRESHOLD 50
>
> +#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW 50
> +#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID 57
> +#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH 64
> +
> #define BCH_DIRTY_INIT_THRD_MAX 64
> /*
> * 14 (16384ths) is chosen here as something that each backing device
>
Thanks in advance.
Coly Li
On 1/14/21 8:22 PM, Dongdong Tao wrote:
> Hi Coly,
>
> Why do you limit the iodepth to 8 and IOPS to 150 on the cache device?
> For a cache device this limitation is small. 150 IOPS with a 4KB block
> size means writing (150*4KB*60*60 = 2160000KB ≈) 2GB of data every
> hour. For 35 hou
On 1/14/21 12:45 PM, Dongdong Tao wrote:
> Hi Coly,
>
> I've got the testing data for multiple threads with larger IO depth.
>
Hi Dongdong,
Thanks for the testing number.
> *Here is the testing steps:
> *1. make-bcache -B <> -C <> --writeback
>
> 2. O
On 1/8/21 4:30 PM, Dongdong Tao wrote:
> Hi Coly,
>
> They are captured with the same time length, the meaning of the
> timestamp and the time unit on the x-axis are different.
> (Sorry, I should have clarified this right after the chart)
>
> For the latency chart:
&
On 1/7/21 10:55 PM, Dongdong Tao wrote:
> Hi Coly,
>
>
> Thanks for the reminder, I understand that the rate is only a hint of
> the throughput, it’s a value to calculate the sleep time between each
> round of keys writeback, the higher the rate, the shorter the sleep
> t
On 1/5/21 11:44 AM, Dongdong Tao wrote:
> Hey Coly,
>
> This is the second version of the patch, please allow me to explain a
> bit for this patch:
>
> We accelerate the rate in 3 stages with different aggressiveness, the
> first stage starts when dirty bucket
this change makes bcache.ko bigger and I don't
see any benefit.
Coly Li
> ---
> drivers/md/bcache/super.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 46a00134a36a..963d62a15f
On 12/24/20 9:53 AM, Yi Li wrote:
> There is no need to reassign pdev_set_uuid in the second loop iteration,
> so move it to just before the second loop.
>
> Signed-off-by: Yi Li
Added into my for-next directory. Thanks.
Coly Li
> ---
> drivers/md/bcache/super.c | 2 +-
&g
On 12/23/20 10:12 PM, Zheng Yongjun wrote:
> Signed-off-by: Zheng Yongjun
NACK. A commit log is necessary to explain why it is too late; IMHO I
don't find the implicit reason in the patch itself.
Coly Li
> ---
> drivers/md/bcache/super.c | 3 +--
> 1 file changed, 1 insertio
On 12/21/20 12:06 PM, Dongdong Tao wrote:
> Hi Coly,
>
> Thank you so much for your prompt reply!
>
> So, I've performed the same fio testing based on 1TB NVME and 10 TB HDD
> disk as the backing device.
> I've run them both for about 4 hours, since it's 1TB
On 12/21/20 11:17 AM, Yi Li wrote:
> Trivial fix to bdput.
>
> Signed-off-by: Yi Li e
Hi Yi,
Indeed these two fixes are not that trivial. I suggest describing in
more detail why your fixes are necessary and what problems are fixed by
your patches.
Thanks.
Coly Li
> ---
&
On 12/18/20 11:25 AM, Dan Williams wrote:
> [ add Neil, original gooodguy who wrote badblocks ]
>
>
> On Thu, Dec 3, 2020 at 9:16 AM Coly Li wrote:
>>
>> Recently I received a bug report that current badblocks code does not
>> properly handl
On 12/20/20 4:02 AM, antlists wrote:
> On 03/12/2020 17:15, Coly Li wrote:
>> This patch is an initial effort to improve badblocks_set() for setting
>> bad blocks range when it covers multiple already set bad ranges in the
>> bad blocks table, and to do it as fast as possib
On 12/14/20 11:30 PM, Dongdong Tao wrote:
> Hi Coly and Dongsheng,
>
> I've got the testing result and confirmed that this testing result is
> reproducible by repeating it many times.
> I ran fio to get the write latency log and parsed the log and then
> generated below l
On 12/11/20 4:52 PM, Zheng Yongjun wrote:
> Replace a comma between expression statements by a semicolon.
>
> Signed-off-by: Zheng Yongjun
Thanks for the catch. Added in my 2nd wave series.
Coly Li
> ---
> drivers/md/bcache/sysfs.c | 2 +-
> 1 file changed, 1 insert
u in advance for any review or comments on this patch.
Signed-off-by: Coly Li
---
block/badblocks.c | 1041 ++---
include/linux/badblocks.h | 33 ++
2 files changed, 881 insertions(+), 193 deletions(-)
diff --git a/block/badblocks.c b/block/badblocks.
gt;running))
> bd_unlink_disk_holder(dc->bdev, dc->disk.disk);
> bcache_device_free(&dc->disk);
>
Such a change is problematic: the writeback rate kworker must be stopped
before the writeback and status_update threads, otherwise you may
encounter other problems.
And when I reviewed your patch I found another similar potential problem.
This is tricky; let me think about how to fix it.
Thank you again for catching this issue.
Coly Li
t be freed before the
>> writeback_rate_update worker terminates. It is possible that I miss
>> something in the code, but I suggest to test with a kernel after v5.3,
>> and better a v5.8+ kernel.
>>
>> Coly Li
>>
> Thanks.
>
> it is confusing why the writeback_rate_update worker ran again after
> cancel_delayed_work_sync() (as the kernel log showed).
>
[snipped]
Coly Li
n't be freed before the
writeback_rate_update worker terminates. It is possible that I miss
something in the code, but I suggest to test with a kernel after v5.3,
and better a v5.8+ kernel.
Coly Li
>
> On 11/30/20, Yi Li wrote:
>> bcache_device_detach will release the cache_set a
sy to response.
Thanks.
Coly Li
> IP: [] update_writeback_rate+0x59/0x3a0 [bcache]
> PGD 879620067 PUD 8755d3067 PMD 0
> Oops: [#1] SMP
> CPU: 8 PID: 1005702 Comm: kworker/8:0 Tainted: G 4.4.0+10 #1
> Hardware name: Intel BIOS SE5C610.86B.01.01.0021.032120170601 03/
On 2020/11/10 12:19, Dongdong Tao wrote:
> [Sorry again for the SPAM detection]
>
> Thank you the reply Coly!
>
> I agree that this patch is not a final solution for fixing the
> fragmentation issue, but more like a workaround to alleviate this
> problem.
> So, part of
who tried to fix here. The error
handling code paths to release the memory objects are implicit.
Thanks.
Coly Li
> ---
> drivers/md/bcache/super.c | 8 ++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
ood direction should be the moving gc. If the moving gc can work
faster, the situation you mentioned above could be relaxed a lot.
I will NACK this patch unless you have observable and reproducible
performance numbers.
Thanks.
Coly Li
> ---
> drivers/md/bcache/bcach
NACK, we only support the default value. Commit 9aaf51654672 ("bcache:
make cutoff_writeback and cutoff_writeback_sync tunable") clearly
announced the purpose was for research and experiment.
Coly Li
On 2020/10/12 13:28, Ira Weiny wrote:
> On Sat, Oct 10, 2020 at 10:20:34AM +0800, Coly Li wrote:
>> On 2020/10/10 03:50, ira.we...@intel.com wrote:
>>> From: Ira Weiny
>>>
>>> These kmap() calls are localized to a single thread. To avoid the over
>>
d like to be
supportive of option 2), introducing a flag to kmap(); then we won't
forget the new thread-localized kmap method, and people won't ask why a
_thread() function is called but no kthread is created.
Thanks.
Coly Li
> Cc: Coly Li (maintainer:BCACHE (BLOCK LAYER CACHE))
> Cc:
patch makes, and I changed it into this shape in the v5.10 series. This
piece of code may stay in the kernel for 2 or 3 versions at most; the
purpose is to make it convenient for people to test async registration
in production environments.
Once the new async registration behavior is verified not to break
anything existing (which we don't know yet), it will become the default
(and only) behavior and the CONFIG_BCACHE_ASYNC_REGISTRATION check will
be removed.
Thank you all for looking at this.
Coly Li
On 2020/10/3 06:28, David Miller wrote:
> From: Coly Li
> Date: Fri, 2 Oct 2020 16:27:27 +0800
>
>> As Sagi Grimberg suggested, the original fix is refined into a more common
>> inline routine:
>> static inline bool sendpage_ok(struct page *page)
>> {
&
On 2020/10/2 20:57, Martin K. Petersen wrote:
>
> Coly,
>
>> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
>> index 6c022ef0f84d..350d0cc4ee62 100644
>> --- a/drivers/mmc/core/queue.c
>> +++ b/drivers/mmc/core/queue.c
>> @@ -190,7 +190,7
ure all patches are the
latest version.
Sorry for the inconvenience and thank you in advance for taking this set.
Coly Li
e_ok() as a code cleanup.
Signed-off-by: Coly Li
Acked-by: Jeff Layton
Cc: Ilya Dryomov
---
net/ceph/messenger.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index bdfd66ba3843..d4d7a0e52491 100644
--- a/net/ceph/messenger.
Slab objects")
Suggested-by: Eric Dumazet
Signed-off-by: Coly Li
Cc: Vasily Averin
Cc: David S. Miller
Cc: sta...@vger.kernel.org
---
net/ipv4/tcp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 31f3b858db81..2135ee7c806d 100
" part is to
make sure the page can be sent to the network layer's zero-copy path.
This part is exactly what sendpage_ok() does.
This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace
the original open coded checks.
Signed-off-by: Coly Li
Reviewed-by: Lee Duncan
Acked-
hange existing kernel_sendpage() behavior for the
improper page zero-copy send; it just provides a hint warning message
for a following potential panic due to kernel memory heap corruption.
Signed-off-by: Coly Li
Cc: Cong Wang
Cc: Christoph Hellwig
Cc: David S. Miller
Cc: Sridhar Samudrala
---
e use sock_no_sendpage to handle this page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Jan Kara
Cc: Jens Axboe
Cc: Mikhail Skorzhinskii
Cc: Philipp Reisner
Cc: Sagi Grimberg
Cc: Vlastimil Babka
Cc: sta...@vger.kernel.org
---
drivers/nvme
want to send page to remote end by kernel_sendpage()
may use this helper to check whether the page is OK. If the helper does
not return true, the driver should try another non-sendpage method (e.g.
sock_no_sendpage()) to handle the page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Chri
at macro sendpage_ok() does, which is
introduced into include/linux/net.h to solve a similar send page issue
in nvme-tcp code.
This patch uses macro sendpage_ok() to replace the open coded checks to
page type and refcount in _drbd_send_page(), as a code cleanup.
Signed-off-by: Coly Li
Cc: Philipp Rei
ge().
- The 3rd patch fixes the page checking issue in nvme-over-tcp driver.
- The 4th patch adds page_count check by using sendpage_ok() in
do_tcp_sendpages() as Eric Dumazet suggested.
- The 5th and 6th patches just replace existing open coded checks with
the inline sendpage_ok() routine.
et host controllers specify maximum discard
timeout")
Fixes: b35fd7422c2f ("block: check queue's limits.discard_granularity in
__blkdev_issue_discard()")
Reported-and-tested-by: Vicente Bergas
Signed-off-by: Coly Li
Acked-by: Adrian Hunter
Cc: Ulf Hansson
---
Changelog,
v4, up
On 2020/10/2 02:47, Vicente Bergas wrote:
> On Thursday, October 1, 2020 9:18:24 AM CEST, Coly Li wrote:
>> In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
>> might be set as 0 (when card->pref_erase > max_discard) while the mmc
>> device s
On 2020/10/1 17:27, Vicente Bergas wrote:
> On Thu, Oct 1, 2020 at 11:07 AM Adrian Hunter wrote:
>>
>> On 1/10/20 11:38 am, Vicente Bergas wrote:
>>> On Thu, Oct 1, 2020 at 9:18 AM Coly Li wrote:
>>>>
>>>> In mmc_queue_setup_discard() the mmc driv
On 2020/10/1 16:38, Vicente Bergas wrote:
> On Thu, Oct 1, 2020 at 9:18 AM Coly Li wrote:
>>
>> In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
>> might be set as 0 (when card->pref_erase > max_discard) while the mmc
>> device still
" part is to
make sure the page can be sent to the network layer's zero-copy path.
This part is exactly what sendpage_ok() does.
This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace
the original open coded checks.
Signed-off-by: Coly Li
Acked-by: Martin K. Peterse
e use sock_no_sendpage to handle this page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Jan Kara
Cc: Jens Axboe
Cc: Mikhail Skorzhinskii
Cc: Philipp Reisner
Cc: Sagi Grimberg
Cc: Vlastimil Babka
Cc: sta...@vger.kernel.org
---
drivers/nvme
e_ok() as a code cleanup.
Signed-off-by: Coly Li
Acked-by: Jeff Layton
Cc: Ilya Dryomov
---
net/ceph/messenger.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index bdfd66ba3843..d4d7a0e52491 100644
--- a/net/ceph/messenger.
Slab objects")
Suggested-by: Eric Dumazet
Signed-off-by: Coly Li
Cc: Vasily Averin
Cc: David S. Miller
Cc: sta...@vger.kernel.org
---
net/ipv4/tcp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 31f3b858db81..2135ee7c806d 100
at macro sendpage_ok() does, which is
introduced into include/linux/net.h to solve a similar send page issue
in nvme-tcp code.
This patch uses macro sendpage_ok() to replace the open coded checks to
page type and refcount in _drbd_send_page(), as a code cleanup.
Signed-off-by: Coly Li
Cc: Philipp Rei
want to send page to remote end by kernel_sendpage()
may use this helper to check whether the page is OK. If the helper does
not return true, the driver should try another non-sendpage method (e.g.
sock_no_sendpage()) to handle the page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Chri
hange existing kernel_sendpage() behavior for the
improper page zero-copy send; it just provides a hint warning message
for a following potential panic due to kernel memory heap corruption.
Signed-off-by: Coly Li
Cc: Cong Wang
Cc: Christoph Hellwig
Cc: David S. Miller
Cc: Sridhar Samudrala
---
-tcp driver.
- The 4th patch adds page_count check by using sendpage_ok() in
do_tcp_sendpages() as Eric Dumazet suggested.
- The 5th and 6th patches just replace existing open coded checks with
the inline sendpage_ok() routine.
Coly Li
Cc: Chaitanya Kulkarni
Cc: Chris Leech
Cc: Christoph Hellwi
et host controllers specify maximum discard
timeout")
Fixes: b35fd7422c2f ("block: check queue's limits.discard_granularity in
__blkdev_issue_discard()")
Reported-by: Vicente Bergas
Signed-off-by: Coly Li
Acked-by: Adrian Hunter
Cc: Ulf Hansson
---
Changelog,
v3, add Fixes
On 2020/10/1 15:13, Adrian Hunter wrote:
> On 1/10/20 9:59 am, Coly Li wrote:
>> In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
>> might be set as 0 (when card->pref_erase > max_discard) while the mmc
>> device still declares to support disc
t;block: check queue's limits.discard_granularity in
__blkdev_issue_discard()").
Reported-by: Vicente Bergas
Signed-off-by: Coly Li
Cc: Adrian Hunter
Cc: Ulf Hansson
---
Changelog,
v2, change commit id of the Fixes tag.
v1, initial version.
drivers/mmc/core/queue.c | 2 +-
1 file changed, 1 insert
On 2020/10/1 14:14, Adrian Hunter wrote:
> On 1/10/20 7:36 am, Coly Li wrote:
>> On 2020/10/1 01:23, Adrian Hunter wrote:
>>> On 30/09/20 7:08 pm, Coly Li wrote:
>>>> In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
>>>> might b
On 2020/10/1 01:23, Adrian Hunter wrote:
> On 30/09/20 7:08 pm, Coly Li wrote:
>> In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
>> might be set as 0 (when card->pref_erase > max_discard) while the mmc
>> device still declares to support
: let host controllers specify maximum
discard timeout")
Reported-by: Vicente Bergas
Signed-off-by: Coly Li
Cc: Adrian Hunter
Cc: Ulf Hansson
---
drivers/mmc/core/queue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core
On 2020/9/25 23:18, Greg KH wrote:
> On Fri, Sep 25, 2020 at 11:01:13PM +0800, Coly Li wrote:
>> The original problem was from nvme-over-tcp code, who mistakenly uses
>> kernel_sendpage() to send pages allocated by __get_free_pages() without
>> __GFP_COMP flag. Such page
want to send page to remote end by kernel_sendpage()
may use this helper to check whether the page is OK. If the helper does
not return true, the driver should try another non-sendpage method (e.g.
sock_no_sendpage()) to handle the page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Chri
e_ok() as a code cleanup.
Signed-off-by: Coly Li
Cc: Ilya Dryomov
Cc: Jeff Layton
---
net/ceph/messenger.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index bdfd66ba3843..d4d7a0e52491 100644
--- a/net/ceph/messenger.c
+++ b/net/
at macro sendpage_ok() does, which is
introduced into include/linux/net.h to solve a similar send page issue
in nvme-tcp code.
This patch uses macro sendpage_ok() to replace the open coded checks to
page type and refcount in _drbd_send_page(), as a code cleanup.
Signed-off-by: Coly Li
Cc: Philipp Rei
hange existing kernel_sendpage() behavior for the
improper page zero-copy send; it just provides a hint warning message
for a following potential panic due to kernel memory heap corruption.
Signed-off-by: Coly Li
Cc: Cong Wang
Cc: Christoph Hellwig
Cc: David S. Miller
Cc: Sridhar Samudrala
---
" part is to
make sure the page can be sent to the network layer's zero-copy path.
This part is exactly what sendpage_ok() does.
This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace
the original open coded checks.
Signed-off-by: Coly Li
Cc: Vasily Averin
Cc: Cong Wang
e use sock_no_sendpage to handle this page.
Signed-off-by: Coly Li
Cc: Chaitanya Kulkarni
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Jan Kara
Cc: Jens Axboe
Cc: Mikhail Skorzhinskii
Cc: Philipp Reisner
Cc: Sagi Grimberg
Cc: Vlastimil Babka
Cc: sta...@vger.kernel.org
---
drivers/nvme
Slab objects")
Suggested-by: Eric Dumazet
Signed-off-by: Coly Li
Cc: Vasily Averin
Cc: David S. Miller
Cc: sta...@vger.kernel.org
---
net/ipv4/tcp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 31f3b858db81..2135ee7c806d 100
patch adds page_count check by using sendpage_ok() in
do_tcp_sendpages() as Eric Dumazet suggested.
- The 5th and 6th patches just replace existing open coded checks with
the inline sendpage_ok() routine.
Coly Li
Cc: Chaitanya Kulkarni
Cc: Chris Leech
Cc: Christoph Hellwig
Cc: Cong Wang
Cc: Davi
On 2020/9/23 16:43, Christoph Hellwig wrote:
> On Wed, Aug 19, 2020 at 12:22:05PM +0800, Coly Li wrote:
>> On 2020/8/19 03:49, Christoph Hellwig wrote:
>>> On Wed, Aug 19, 2020 at 12:33:37AM +0800, Coly Li wrote:
>>>> On 2020/8/19 00:24, Christoph Hellwig wrote:
ges there as well by applying the same scheme based on
> max_sectors.
>
> Signed-off-by: Christoph Hellwig
> Reviewed-by: Johannes Thumshirn
For the bcache part,
Acked-by: Coly Li
Thanks.
Coly Li
> ---
> block/blk-settings.c | 5 ++---
> block/blk-sysfs.c
On 2020/9/21 16:07, Christoph Hellwig wrote:
> Inherit the optimal I/O size setting just like the readahead window,
> as any reason to do larger I/O does not apply to just readahead.
>
> Signed-off-by: Christoph Hellwig
Acked-by: Coly Li
Thanks.
Coly Li
> ---
> drivers
On 2020/9/21 22:00, Christoph Hellwig wrote:
> On Mon, Sep 21, 2020 at 05:54:59PM +0800, Coly Li wrote:
>> I am not sure whether virtual bcache device's optimal request size can
>> be simply set like this.
>>
>> Most of time inherit backing device's optimal r
the optimal request size of the virtual
bcache device as the least common multiple of cache device's and backing
device's optimal request sizes?
[snipped]
Thanks.
Coly Li
o
> ---
> v2: based on linux-next(20200917), and can be applied to
> mainline cleanly now.
Added into my test queue. Thanks.
Coly Li
>
> drivers/md/bcache/closure.c | 16 +++-
> 1 file changed, 3 insertions(+), 13 deletions(-)
>
> diff --git a/driv
and Christoph,
It has been quiet for a while; how should we proceed with the
kernel_sendpage() related issue?
Will Christoph's series or mine be considered the proper fix, or should
I wait for some better idea to show up? Either is OK for me, as long as
the problem gets fixed.
Thanks in advance.
Coly Li
Linux kernel v5.8
and tpm2-tools-4.1, people can create a trusted key by following the
examples in this document.
Signed-off-by: Coly Li
Reviewed-by: Jarkko Sakkinen
Reviewed-by: Stefan Berger
Cc: Dan Williams
Cc: James Bottomley
Cc: Jason Gunthorpe
Cc: Jonathan Corbet
Cc: Mimi Zohar
Cc
On 2020/8/20 05:04, Jarkko Sakkinen wrote:
> On Thu, Aug 20, 2020 at 12:02:38AM +0300, Jarkko Sakkinen wrote:
>> On Wed, Aug 19, 2020 at 01:00:02AM +0800, Coly Li wrote:
>>> The parameters in command examples for tpm2_createprimary and
>>> tpm2_evictcontrol are outdate
On 2020/8/21 14:48, Leizhen (ThunderTown) wrote:
>
>
> On 8/21/2020 12:11 PM, Coly Li wrote:
>> On 2020/8/21 10:03, Zhen Lei wrote:
>>> There are too many PAGE_SECTORS definitions, and all of them are the
>>> same. It looks a bit of a mess. So why not move it