On 1/19/21 8:56 PM, Dongdong Tao wrote:
> From: dongdong tao
>
> The current way of calculating the writeback rate considers only the
> dirty sectors. This usually works fine when the fragmentation
> is not high, but it gives us an unreasonably small rate when
> very few
From: dongdong tao
The current way of calculating the writeback rate considers only the
dirty sectors. This usually works fine when the fragmentation
is not high, but it gives us an unreasonably small rate when
very few dirty sectors consume a lot of dirty buckets. In some
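[Editorial note: to make the problem concrete, here is a minimal sketch of
the idea, with hypothetical names (fragmentation_boost, bucket_sectors are
illustrative assumptions, not the patch's actual code): when few dirty
sectors occupy many dirty buckets, scale the writeback rate up so it
tracks bucket consumption rather than raw dirty sectors alone.]

#include <stdint.h>

/*
 * Illustrative sketch only, not the submitted patch; all names are
 * assumptions. Returns a multiplier for the writeback rate.
 */
static uint64_t fragmentation_boost(uint64_t dirty_sectors,
				    uint64_t dirty_buckets,
				    uint64_t bucket_sectors)
{
	uint64_t sectors_per_bucket;

	if (!dirty_sectors || !dirty_buckets)
		return 1;

	/* Average dirty sectors per dirty bucket: a small value means
	 * few dirty sectors are spread over many buckets, i.e. the
	 * cache space is badly fragmented. */
	sectors_per_bucket = dirty_sectors / dirty_buckets;
	if (!sectors_per_bucket)
		return bucket_sectors;

	/* Mostly-full dirty buckets: no boost needed. */
	if (sectors_per_bucket * 2 >= bucket_sectors)
		return 1;

	/* Boost in proportion to how empty the dirty buckets are. */
	return bucket_sectors / sectors_per_bucket;
}

Such a factor would multiply the rate produced by the existing rate
controller and stay at 1 when fragmentation is low.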
Hi Dongdong,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url:
Hi Coly,
Apologies for any confusion that I might have caused, and thanks a lot
for your patience and your help!
On Thu, Jan 14, 2021 at 9:31 PM Coly Li wrote:
>
> On 1/14/21 8:22 PM, Dongdong Tao wrote:
> > Hi Coly,
> >
> > Why do you limit the iodepth to 8 and IOPS to 150 on the cache device?
> > For
On 1/14/21 8:22 PM, Dongdong Tao wrote:
> Hi Coly,
>
> Why do you limit the iodepth to 8 and IOPS to 150 on the cache device?
> For the cache device this limitation is small. IOPS 150 with a 4KB block
> size means writing (150 * 4KB * 60 * 60 = 2,160,000 KB ≈) 2GB of data
> every hour. For 35 hours it is only 70GB.
>
>
> W
Hi Coly,
Why do you limit the iodepth to 8 and IOPS to 150 on the cache device?
For the cache device this limitation is small. IOPS 150 with a 4KB block
size means writing (150 * 4KB * 60 * 60 = 2,160,000 KB ≈) 2GB of data
every hour. For 35 hours it is only 70GB.
What if the iodepth is 128 or 64, and there is no IOPS rate limitation?
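[Editorial note: a quick sanity check of the arithmetic above, as a
throwaway standalone snippet; it is not part of any harness used in the
thread.]

#include <stdio.h>

int main(void)
{
	long iops = 150;	/* fio iops limit */
	long block_kb = 4;	/* 4KB block size */
	long kb_per_hour = iops * block_kb * 60 * 60;	/* 2,160,000 KB */

	printf("%ld KB/hour ~= %ld GB/hour\n",
	       kb_per_hour, kb_per_hour / (1024 * 1024));	/* ~2 GB */
	printf("35 hours ~= %ld GB\n",
	       35 * kb_per_hour / (1024 * 1024));		/* ~72 GB */
	return 0;
}

This agrees with the roughly 2GB/hour and ~70GB/35h figures above.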
On 1/14/21 12:45 PM, Dongdong Tao wrote:
> Hi Coly,
>
> I've got the testing data for multiple threads with larger IO depth.
>
Hi Dongdong,
Thanks for the testing numbers.
> Here are the testing steps:
> 1. make-bcache -B <> -C <> --writeback
>
> 2. Open two tabs, start different fio tasks in
[Share the google doc here to avoid SPAM detection]
Here is the new testing result for the multi-threaded fio testing:
https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing
On Fri, Jan 8, 2021 at 4:47 PM Dongdong Tao wrote:
>
> Yeap, I will scale the t
Yeap, I will scale the testing for multiple threads with larger IO
depth, thanks for the suggestion!
On Fri, Jan 8, 2021 at 4:40 PM Coly Li wrote:
>
> On 1/8/21 4:30 PM, Dongdong Tao wrote:
> > Hi Coly,
> >
> > They are captured over the same length of time; the meaning of the
> > timestamp and the
On 1/8/21 4:30 PM, Dongdong Tao wrote:
> Hi Coly,
>
> They are captured over the same length of time; the meaning of the
> timestamp and the time unit on the x-axis are different.
> (Sorry, I should have clarified this right after the chart.)
>
> For the latency chart:
> The timestamp is the relative
Hi Coly,
They are captured over the same length of time; the meaning of the
timestamp and the time unit on the x-axis are different.
(Sorry, I should have clarified this right after the chart.)
For the latency chart:
The timestamp is the relative time since the beginning of the
benchmark, so the star
On 1/7/21 10:55 PM, Dongdong Tao wrote:
> Hi Coly,
>
>
> Thanks for the reminder. I understand that the rate is only a hint of
> the throughput; it is a value used to calculate the sleep time between
> each round of keys writeback. The higher the rate, the shorter the
> sleep time; most of the time thi
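[Editorial note: a rough illustration of that rate-to-sleep relationship,
with made-up names modeled loosely on bcache's ratelimit logic rather
than quoted from it.]

#include <stdint.h>

struct ratelimit {
	uint64_t next_ns;	/* time at which the next batch is "due" */
	uint64_t rate;		/* target sectors per second */
};

/* How long (ns) the writeback thread sleeps after writing `done`
 * sectors: each sector consumes 1e9 / rate nanoseconds of budget, so a
 * higher rate advances next_ns more slowly and yields shorter sleeps. */
static uint64_t next_delay_ns(struct ratelimit *d, uint64_t done,
			      uint64_t now_ns)
{
	if (!d->rate)
		return 0;

	d->next_ns += done * 1000000000ULL / d->rate;

	return d->next_ns > now_ns ? d->next_ns - now_ns : 0;
}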
On 1/5/21 11:44 AM, Dongdong Tao wrote:
> Hey Coly,
>
> This is the second version of the patch; please allow me to explain
> it a bit:
>
> We accelerate the rate in three stages with increasing aggressiveness;
> the first stage starts when the dirty bucket percentage rises above
> BCH_WRITEBACK
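[Editorial note: a rough sketch of such staged acceleration; the
threshold values and fp_term_* names below are placeholders for
illustration, not the patch's actual constants.]

#include <stdint.h>

/* Placeholder thresholds: percentage of cache buckets that are dirty. */
#define FRAG_THRESHOLD_LOW	50
#define FRAG_THRESHOLD_MID	57
#define FRAG_THRESHOLD_HIGH	64

/* Extra term added to the writeback rate; each stage applies a more
 * aggressive multiplier, scaled by how far past its threshold the
 * dirty-bucket percentage has climbed. */
static int64_t fragment_rate_term(int64_t dirty_bucket_pct,
				  int64_t fp_term_low,
				  int64_t fp_term_mid,
				  int64_t fp_term_high)
{
	if (dirty_bucket_pct <= FRAG_THRESHOLD_LOW)
		return 0;
	if (dirty_bucket_pct <= FRAG_THRESHOLD_MID)
		return fp_term_low * (dirty_bucket_pct - FRAG_THRESHOLD_LOW);
	if (dirty_bucket_pct <= FRAG_THRESHOLD_HIGH)
		return fp_term_mid * (dirty_bucket_pct - FRAG_THRESHOLD_MID);
	return fp_term_high * (dirty_bucket_pct - FRAG_THRESHOLD_HIGH);
}

With fp_term_low < fp_term_mid < fp_term_high, the rate ramps gently at
first and then sharply as the cache nears the dirty-bucket cutoff.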
From: dongdong tao
The current way of calculating the writeback rate considers only the
dirty sectors. This usually works fine when the fragmentation
is not high, but it gives us an unreasonably small rate when
very few dirty sectors consume a lot of dirty buckets. In some
On 12/21/20 12:06 PM, Dongdong Tao wrote:
> Hi Coly,
>
> Thank you so much for your prompt reply!
>
> So, I've performed the same fio testing with a 1TB NVMe device and a
> 10TB HDD disk as the backing device.
> I've run them both for about 4 hours; since it's a 1TB NVMe device, it
> will roughly take ab
On 12/14/20 11:30 PM, Dongdong Tao wrote:
> Hi Coly and Dongsheng,
>
> I've got the testing results and confirmed that they are
> reproducible by repeating the test many times.
> I ran fio to get the write latency log, parsed it, and then
> generated the latency graphs below with some vi
On 2020/12/9 (Wednesday) 12:48 PM, Dongdong Tao wrote:
Hi Dongsheng,
I'm working on it; as the next step I'm gathering some testing data to
upload (very sorry for the delay...).
Thanks for the comment.
One of the main concerns in alleviating this issue through the writeback
process is that we need to minimize the imp
Hi Dongsheng,
I'm working on it; as the next step I'm gathering some testing data to
upload (very sorry for the delay...).
Thanks for the comment.
One of the main concerns in alleviating this issue through the writeback
process is that we need to minimize the impact on the client IO
performance.
writeback_per
On 2020/11/3 (Tuesday) 8:42 PM, Dongdong Tao wrote:
From: dongdong tao
The current way of calculating the writeback rate considers only the
dirty sectors. This usually works fine when the fragmentation
is not high, but it gives us an unreasonably small rate when
very few dirty
On 2020/11/10 12:19, Dongdong Tao wrote:
> [Sorry again for the SPAM detection]
>
> Thank you for the reply, Coly!
>
> I agree that this patch is not a final solution for fixing the
> fragmentation issue, but more of a workaround to alleviate the
> problem.
> So, part of my intention is to find out
[Sorry again for the SPAM detection]
Thank you for the reply, Coly!
I agree that this patch is not a final solution for fixing the
fragmentation issue, but more of a workaround to alleviate the
problem.
So, part of my intention is to find out how upstream would like to fix
this issue.
I've looked
On 2020/11/3 20:42, Dongdong Tao wrote:
> From: dongdong tao
>
> The current way of calculating the writeback rate considers only the
> dirty sectors. This usually works fine when the fragmentation
> is not high, but it gives us an unreasonably small rate when
> very few
Hi Dongdong,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[also build test ERROR on v5.10-rc2 next-20201103]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
From: dongdong tao
The current way of calculating the writeback rate considers only the
dirty sectors. This usually works fine when the fragmentation
is not high, but it gives us an unreasonably small rate when
very few dirty sectors consume a lot of dirty buckets. In some