Signed-off-by: Ming Lei
---
Another approach is to do the check after BLK_STS_RESOURCE is returned
from .queue_rq() and BLK_MQ_S_SCHED_RESTART is set, but that way may
introduce a bit of cost in the hot path; it was actually V1 of this patch,
please see it at the following link:
https://github.
blk-mq IO merging via blk_insert_cloned_request feedback")
Reported-by: Laurence Oberman
Reviewed-by: Mike Snitzer
Signed-off-by: Ming Lei
---
block/blk-mq.c | 22 ++
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4d4
On Tue, Jan 09, 2018 at 12:09:27AM +0300, Dmitry Osipenko wrote:
> On 18.12.2017 15:22, Ming Lei wrote:
> > When merging one bvec into segment, if the bvec is too big
> > to merge, current policy is to move the whole bvec into another
> > new segment.
> >
> > This
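For illustration only, a tiny user-space C sketch of the policy quoted
above: merge a bvec into the current segment only if it fits, otherwise
the whole bvec starts a new segment. MAX_SEG_SIZE, struct seg and
try_merge_bvec() are made-up names here, not kernel code.

#include <stdio.h>

#define MAX_SEG_SIZE 4096  /* illustrative max segment size */

struct seg { unsigned int len; };

/*
 * Toy model of the policy described above: if adding the whole bvec
 * would exceed the segment size limit, do not merge it; the caller
 * moves the entire bvec into a new segment instead.
 */
static int try_merge_bvec(struct seg *s, unsigned int bvec_len)
{
    if (s->len + bvec_len > MAX_SEG_SIZE)
        return 0;       /* too big: whole bvec goes to a new segment */
    s->len += bvec_len; /* small enough: merge into current segment */
    return 1;
}

int main(void)
{
    struct seg s = { .len = 3000 };

    printf("merge 512:  %s\n", try_merge_bvec(&s, 512)  ? "merged" : "new segment");
    printf("merge 2048: %s\n", try_merge_bvec(&s, 2048) ? "merged" : "new segment");
    return 0;
}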
On Tue, Jan 09, 2018 at 04:18:39PM +0300, Dmitry Osipenko wrote:
> On 09.01.2018 05:34, Ming Lei wrote:
> > On Tue, Jan 09, 2018 at 12:09:27AM +0300, Dmitry Osipenko wrote:
> >> On 18.12.2017 15:22, Ming Lei wrote:
> >>> When merging one bvec into segment, if the
On Tue, Jan 09, 2018 at 08:02:53PM +0300, Dmitry Osipenko wrote:
> On 09.01.2018 17:33, Ming Lei wrote:
> > On Tue, Jan 09, 2018 at 04:18:39PM +0300, Dmitry Osipenko wrote:
> >> On 09.01.2018 05:34, Ming Lei wrote:
> >>> On Tue, Jan 09, 2018 at 12:09:27AM +0300, Dm
is a dm-mpath queue.
>
> There seems to be something wrong in hctx->nr_active.
Then it looks like the same issue I saw when starting multipathd, and the
following patch should fix it, if there isn't another issue.
https://marc.info/?l=linux-block&m=151586577400558&w=2
--
Ming Lei
On Sun, Jan 14, 2018 at 06:40:40PM -0500, Laurence Oberman wrote:
> On Thu, 2018-01-04 at 14:32 -0800, Vinson Lee wrote:
> > Hi.
> >
> > HP ProLiant DL360p Gen8 with Smart Array P420i boots to the login
> > prompt and hangs with Linux 4.13 or later. I cannot log in on console
> > or SSH into the m
On Mon, Jan 15, 2018 at 10:25:01AM -0500, Mike Snitzer wrote:
> On Mon, Jan 15 2018 at 8:27am -0500,
> Stephen Rothwell wrote:
>
> > Hi all,
> >
> > Commit
> >
> > 34e1467da673 ("Revert "genirq/affinity: assign vectors to all possible
> > CPUs"")
> >
> > is missing a Signed-off-by from its
e function, and prepares
for the fix done in the 2nd patch.
The 2nd patch fixes the issue by trying to make sure online CPUs are
assigned to irq vectors.
Ming Lei (2):
genirq/affinity: move irq vectors spread into one function
genirq/affinity: try best to make sure online CPU is assigned to
vect
: Thomas Gleixner
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 56 +++
1 file changed, 34 insertions(+), 22 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index a37a3b4b6342..99eb38a4cc83 100644
spread irq vectors:
1) spread irq vectors across offline CPUs in the node cpumask
2) spread irq vectors across online CPUs in the node cpumask
   (a toy sketch of this two-stage idea follows this changelog)
Fixes: 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
Cc: Thomas Gleixner
Cc: Christoph Hellwig
Reported-by: Laurence Oberman
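For illustration, a minimal user-space sketch of the two-stage spread
described in the changelog above, using plain bitmasks instead of
struct cpumask; spread(), NR_CPUS and the mask encoding are invented
here and are not the kernel implementation.

#include <stdio.h>

#define NR_CPUS 8

/*
 * Toy model of the two-stage spread: stage order follows the changelog
 * above, first the offline CPUs of the node, then the online ones.
 */
static void spread(unsigned node_mask, unsigned online_mask,
                   int vecs[], int nvec)
{
    unsigned stage_masks[2] = {
        node_mask & ~online_mask, /* stage 1: offline CPUs in the node */
        node_mask & online_mask,  /* stage 2: online CPUs in the node */
    };
    int v = 0;

    for (int s = 0; s < 2 && v < nvec; s++)
        for (int cpu = 0; cpu < NR_CPUS && v < nvec; cpu++)
            if (stage_masks[s] & (1u << cpu))
                vecs[v++] = cpu;
}

int main(void)
{
    int vecs[4];

    /* node holds CPUs 0-3; only CPUs 2 and 3 are online */
    spread(0x0f, 0x0c, vecs, 4);
    for (int i = 0; i < 4; i++)
        printf("vector %d -> cpu %d\n", i, vecs[i]);
    return 0;
}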
On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > Hi,
> >
> > These two patches fixes IO hang issue reported by Laurence.
> >
> > 84676c1f21 ("genirq/affinity: assign vectors to
On Mon, Jan 15, 2018 at 06:43:47PM +0100, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Ming Lei wrote:
> > These two patches fixes IO hang issue reported by Laurence.
> >
> > 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> > may cause o
On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > Hi,
> >
> > These two patches fixes IO hang issue reported by Laurence.
> >
> > 84676c1f21 ("genirq/affinity: assign vectors to
Hi Jianchao,
On Tue, Jan 16, 2018 at 06:12:09PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 01/12/2018 10:53 AM, Ming Lei wrote:
> > From: Christoph Hellwig
> >
> > The previous patch assigns interrupt vectors to all possible CPUs, so
> > now hctx can be ma
On Tue, Jan 16, 2018 at 12:25:19PM +0100, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Ming Lei wrote:
>
> > On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > > Hi,
> >
On Tue, Jan 16, 2018 at 10:31:42PM +0800, jianchao.wang wrote:
> Hi minglei
>
> On 01/16/2018 08:10 PM, Ming Lei wrote:
> >>> - next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
> >>> + next_cpu = cpumask
On Tue, Jan 16, 2018 at 03:22:18PM +, Don Brace wrote:
> > -Original Message-
> > From: Laurence Oberman [mailto:lober...@redhat.com]
> > Sent: Tuesday, January 16, 2018 7:29 AM
> > To: Thomas Gleixner ; Ming Lei
> > Cc: Christoph Hellwig ; Jens Axboe ;
>
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote:
> On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
> >
> > So I suspect we'll need to go with a patch like this, just with a way
> > better changelog.
>
> I have to agree this is required for that use case. I'll run so
On Tue, Mar 13, 2018 at 09:38:41AM +0200, Artem Bityutskiy wrote:
> On Tue, 2018-03-13 at 11:11 +0800, Dou Liyang wrote:
> > I also
> > met the situation that BIOS told to ACPI that it could support
> > physical
> > CPUs hotplug, But actually, there was no hardware slots in the
> > machine.
> > th
eld by swapper/0/0:
> [ 2.170658] #0: (&(&dq->lock)->rlock){..-.}, at: [<b45eaf9e>]
> dasd_block_tasklet+0x1cc/0x480
> [ 2.170676] #1: (rcu_read_lock){}, at: [<bc7fa045>]
> hctx_lock+0x34/0x110
> [ 2.170750] Last Breaking-Event-Addres
Hi Thomas,
On Wed, Apr 04, 2018 at 09:38:26PM +0200, Thomas Gleixner wrote:
> On Wed, 4 Apr 2018, Ming Lei wrote:
> > On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote:
> > > In the example above:
> > >
> > > > > > irq 39, cpu
On Fri, Apr 06, 2018 at 11:49:47PM +0200, Thomas Gleixner wrote:
> On Fri, 6 Apr 2018, Thomas Gleixner wrote:
>
> > On Fri, 6 Apr 2018, Ming Lei wrote:
> > >
> > > I will post V4 soon by using cpu_present_mask in the 1st stage irq spread.
> > > And it
On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote:
> Ming Lei wrote:
> > Sure, thanks for your sharing.
> >
> > Wakko, could you test the following patch and see if there is any
> > difference?
> >
> > --
> > diff --git a/drivers/tar
On Thu, Apr 12, 2018 at 09:43:02PM -0400, Wakko Warner wrote:
> Ming Lei wrote:
> > On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote:
> > > Sorry for the delay. I reverted my change, added this one. I didn't
> > > reboot, I just unloaded and loaded t
On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
> NVMe driver uses threads for the work at device reset, including enabling
> the PCIe device. When multiple NVMe devices are initialized, their reset
> works may be scheduled in parallel. Then pci_enable_device_mem can be
> called i
On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote:
> > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
> >> NVMe driver uses threads for the work at device reset, including enabling
> >> the PCIe device. When multiple NVMe devices are initialized, their reset
> >> w
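To illustrate the race being discussed, here is a toy pthread sketch
that serializes a shared enable path with a single lock. It only
demonstrates the generic serialization idea; it is not the actual
NVMe/PCI fix, and all names are made up.

#include <pthread.h>
#include <stdio.h>

/*
 * Toy model of the race above: several reset workers call a shared
 * "enable" path in parallel. Serializing the non-reentrant part with
 * one lock is the generic remedy sketched here.
 */
static pthread_mutex_t enable_lock = PTHREAD_MUTEX_INITIALIZER;
static int enabled_count;

static void device_enable(int id)
{
    pthread_mutex_lock(&enable_lock);
    /* critical section: only one reset worker enables at a time */
    enabled_count++;
    printf("worker %d enabled device (total %d)\n", id, enabled_count);
    pthread_mutex_unlock(&enable_lock);
}

static void *reset_worker(void *arg)
{
    device_enable(*(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int ids[4] = { 0, 1, 2, 3 };

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, reset_worker, &ids[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}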
Hi Li Wang,
On Mon, Apr 09, 2018 at 06:18:04PM +0800, Li Wang wrote:
> Hi,
>
> I got this BUG_ON() on s390x platform with kernel-v4.16.0.
Today I saw this bug too; from my first look, it seems to be because
get_max_io_size() returns zero in blk_bio_segment_split().
And I triggered that in one M
On Mon, Apr 09, 2018 at 06:18:04PM +0800, Li Wang wrote:
> Hi,
>
> I got this BUG_ON() on s390x platform with kernel-v4.16.0.
>
> [1.200196] [ cut here ]
> [1.200201] kernel BUG at block/bio.c:1798!
> [1.200228] illegal operation: 0001 ilc:1 [#1] SMP
> [1.2
5804bac60eb58b145839b5893e
> > Author: Ming Lei
> > Date: Fri Nov 11 20:05:32 2016 +0800
> >
> > target: avoid accessing .bi_vcnt directly
> >
> > When the bio is full, bio_add_pc_page() will return zero,
> > so use this information tell wh
On Mon, Apr 09, 2018 at 07:43:01PM -0400, Wakko Warner wrote:
> Ming Lei wrote:
> > On Mon, Apr 09, 2018 at 09:30:11PM +, Bart Van Assche wrote:
> > > Hello Ming,
> > >
> > > Can you have a look at this? The start of this e-mail thread is available
> >
rong git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/Ming-Lei/genirq-affinity-irq-vector-spread-among-online-CPUs-as-far-as-possible/20180305-184912
> config: i386-randconfig-a1-201809 (attached as .config)
> c
r into one
prep patch
- add Reviewed-by tag
Thanks
Ming
Ming Lei (4):
genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
genirq/affinity: move actual irq vector spread into one helper
genirq/affinity: support to do irq vectors spread starting from any
The following patches will introduce two-stage irq spread for improving
irq spread on all possible CPUs.
No functional change.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 26 +-
1 file changed, 13 insertions
No functional change, just prepare for converting to 2-stage
irq vector spread.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 97 +--
1 file changed, 55 insertions(+), 42 deletions
nt 3) isn't the optimal result from a NUMA view, but it
returns more irq vectors with online CPUs mapped; given that in reality one
CPU should be enough to handle one irq vector, it is better to do it this way.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Reported-by: Laurence Oberman
Signed
d among all
possible CPUs.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 23 +++
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index e119e86bed48..616f0
ontroller is reset successfully,
these requests will be dispatched again.
So please keep the name 'cancel' or use that sort of name.
Thanks,
Ming Lei
On Thu, Mar 08, 2018 at 03:18:33PM +0200, Artem Bityutskiy wrote:
> On Thu, 2018-03-08 at 18:53 +0800, Ming Lei wrote:
> > Hi,
> >
> > This patchset tries to spread among online CPUs as far as possible, so
> > that we can avoid to allocate too less irq vectors
Hi Thomas,
On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote:
> On Thu, 8 Mar 2018, Ming Lei wrote:
> > Actually, it isn't a real fix, the real one is in the following two:
> >
> > 0c20244d458e scsi: megaraid_sas: fix selection of reply queue
>
On Fri, Mar 09, 2018 at 09:00:08AM +0200, Artem Bityutskiy wrote:
> On Fri, 2018-03-09 at 09:24 +0800, Ming Lei wrote:
> > Hi Thomas,
> >
> > On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote:
> > > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > &g
On Fri, Mar 09, 2018 at 11:08:54AM +0100, Thomas Gleixner wrote:
> On Fri, 9 Mar 2018, Ming Lei wrote:
> > On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote:
> > > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > > Actually, it isn't a real fix, t
> */
> pci_free_irq_vectors(pdev);
> - nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
> - PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
> - if (nr_io_queues <= 0)
> + ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_io_queues + 1),
>
On Tue, Mar 13, 2018 at 02:08:23PM +0100, Martin Steigerwald wrote:
> Hans de Goede - 11.03.18, 15:37:
> > Hi Martin,
> >
> > On 11-03-18 09:20, Martin Steigerwald wrote:
> > > Hello.
> > >
> > > Since 4.16-rc4 (upgraded from 4.15.2 which worked) I have an issue
> > > with SMART checks occassiona
Commit 7759eb23fd980 ("block: remove bio_rewind_iter()") removed
bio_rewind_iter(); since then no one uses bvec_iter_rewind() any more,
so remove it.
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 24
1 file changed, 24 deletions(-)
diff --git a/include/linu
On Wed, Nov 21, 2018 at 05:08:11PM +0100, Christoph Hellwig wrote:
> On Wed, Nov 21, 2018 at 11:06:11PM +0800, Ming Lei wrote:
> > bvec_iter_* is used for single-page bvec in current linus tree, and there are lots of users now:
> >
> > [linux]$
On Tue, Oct 30, 2018 at 04:06:24PM -0700, Evan Green wrote:
> If the backing device for a loop device is a block device,
> then mirror the discard properties of the underlying block
> device into the loop device. While in there, differentiate
> between REQ_OP_DISCARD and REQ_OP_WRITE_ZEROES, which
On Fri, Nov 16, 2018 at 02:30:28PM +0100, Christoph Hellwig wrote:
> > +static inline void __bio_advance_iter(struct bio *bio, struct bvec_iter
> > *iter,
> > + unsigned bytes, bool mp)
>
> I think these magic 'bool np' arguments and wrappers over wrapper
> don't h
On Sun, Nov 18, 2018 at 08:10:14PM -0700, Jens Axboe wrote:
> On 11/18/18 7:23 PM, Ming Lei wrote:
> > On Fri, Nov 16, 2018 at 02:13:05PM +0100, Christoph Hellwig wrote:
> >>> -#define bvec_iter_page(bvec, iter) \
> >>> +#de
On Thu, Nov 15, 2018 at 12:20:28PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:50PM +0800, Ming Lei wrote:
> > First it is more efficient to use bio_for_each_bvec() in both
> > blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
> > many multi
On Fri, Nov 16, 2018 at 02:33:14PM +0100, Christoph Hellwig wrote:
> > + if (!*sg)
> > + return sglist;
> > + else {
>
> No need for an else after an early return.
OK, good catch!
Thanks,
Ming
On Thu, Nov 15, 2018 at 03:23:56PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:52PM +0800, Ming Lei wrote:
> > BTRFS and guard_bio_eod() need to get the last singlepage segment
> > from one multipage bvec, so introduce this helper to make them happy.
> >
On Fri, Nov 16, 2018 at 02:37:10PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:54PM +0800, Ming Lei wrote:
> > index 2955a4ea2fa8..161e14b8b180 100644
> > --- a/fs/btrfs/compression.c
> > +++ b/fs/btrfs/compression.c
> > @@ -40
On Thu, Nov 15, 2018 at 04:23:56PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > BTRFS is the only user of this helper, so move this helper into
> > BTRFS, and implement it via bio_for_each_segment_all(), since
> > bio->bi_vcnt
On Fri, Nov 16, 2018 at 02:38:45PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > BTRFS is the only user of this helper, so move this helper into
> > BTRFS, and implement it via bio_for_each_segment_all(), since
> > bio->
On Fri, Nov 16, 2018 at 02:45:41PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:56PM +0800, Ming Lei wrote:
> > There are still cases in which we need to use bio_bvecs() for get the
> > number of multi-page segment, so introduce it.
>
> The only user in you
On Thu, Nov 15, 2018 at 04:40:22PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:57PM +0800, Ming Lei wrote:
> > iov_iter is implemented with bvec itererator, so it is safe to pass
> > multipage bvec to it, and this way is much more efficient than
> > passing
On Thu, Nov 15, 2018 at 04:44:02PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:58PM +0800, Ming Lei wrote:
> > bch_bio_alloc_pages() is always called on one new bio, so it is safe
> > to access the bvec table directly. Given it is the only kind of this
> > cas
On Fri, Nov 16, 2018 at 02:46:45PM +0100, Christoph Hellwig wrote:
> > - bio_for_each_segment_all(bv, bio, i) {
> > + for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++) {
>
> This really needs a comment. Otherwise it looks fine to me.
OK, will do it in next version.
Thanks,
Ming
On Thu, Nov 15, 2018 at 01:42:52PM +0100, David Sterba wrote:
> On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> > diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> > index 13ba2011a306..789b09ae402a 100644
> > --- a/block/blk-zoned.c
> > +++ b/block/blk-z
On Thu, Nov 15, 2018 at 05:22:45PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> > This patch introduces one extra iterator variable to
> > bio_for_each_segment_all(),
> > then we can allow bio_for_each_segment_all() to iterate over
On Thu, Nov 15, 2018 at 05:46:58PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:00PM +0800, Ming Lei wrote:
> > After multi-page is enabled, one new page may be merged to a segment
> > even though it is a new added page.
> >
> > This patch deals with
On Fri, Nov 16, 2018 at 02:49:36PM +0100, Christoph Hellwig wrote:
> I'd much rather have __bio_try_merge_page only do merges in
> the same page, and have a new __bio_try_merge_segment that does
> multi-page merges. This will keep the accounting a lot simpler.
This way looks clever; will do it
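For illustration, a user-space sketch of the split Christoph suggests
above: one helper that merges only within the same page, and one that
merges any physically contiguous range. struct tvec and both helpers
are toy stand-ins, not the real __bio_try_merge_page() or a real
__bio_try_merge_segment().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Toy bvec: a page "address" plus offset/len, mirroring struct bio_vec. */
struct tvec {
    uintptr_t page;     /* page-aligned "address" standing in for bv_page */
    unsigned  off, len;
};

/* Same-page merge only: toy counterpart of the suggested page helper. */
static bool try_merge_page(struct tvec *bv, uintptr_t page, unsigned off,
                           unsigned len)
{
    if (page == bv->page && off == bv->off + bv->len) {
        bv->len += len;
        return true;
    }
    return false;
}

/* Multi-page merge: merge whenever the new range is physically
 * contiguous with the old one, even across a page boundary. */
static bool try_merge_segment(struct tvec *bv, uintptr_t page, unsigned off,
                              unsigned len)
{
    if (page + off == bv->page + bv->off + bv->len) {
        bv->len += len;
        return true;
    }
    return false;
}

int main(void)
{
    struct tvec bv = { .page = 0x10000, .off = PAGE_SIZE - 512, .len = 512 };

    /* next page, offset 0: physically contiguous but a different page */
    printf("page merge:    %d\n", try_merge_page(&bv, 0x10000 + PAGE_SIZE, 0, 512));
    printf("segment merge: %d\n", try_merge_segment(&bv, 0x10000 + PAGE_SIZE, 0, 512));
    return 0;
}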
On Thu, Nov 15, 2018 at 05:56:27PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:01PM +0800, Ming Lei wrote:
> > This patch pulls the trigger for multi-page bvecs.
> >
> > Now any request queue which supports queue cluster will see multi-page
> > bvecs
On Fri, Nov 16, 2018 at 02:53:08PM +0100, Christoph Hellwig wrote:
> > -
> > - if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
> > - bv->bv_len += len;
> > - bio->bi_iter.bi_size += len;
> > - return true;
> > -
On Thu, Nov 15, 2018 at 05:59:36PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:02PM +0800, Ming Lei wrote:
> > Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
> > increase BIO_MAX_PAGES for it.
>
> You mentioned to it in the cover l
On Thu, Nov 15, 2018 at 06:11:40PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:04PM +0800, Ming Lei wrote:
> > It is wrong to use bio->bi_vcnt to figure out how many segments
> > there are in the bio even though CLONED flag isn't set on this bio,
>
On Thu, Nov 15, 2018 at 06:18:11PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:05PM +0800, Ming Lei wrote:
> > Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
> > splitting"),
> > physical segment number is mainly figured out in blk_
On Fri, Nov 16, 2018 at 02:58:03PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:53:05PM +0800, Ming Lei wrote:
> > Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
> > splitting"),
> > physical segment number is mainly figured out in
On Sat, Nov 10, 2018 at 12:06:35AM -0800, Vito Caputo wrote:
> I ask because I recently performed some fstrims on my 4.19-running
> laptop after a good house cleaning, and things started going rather
> haywire today at the filesystem level, on different filesystems of
> differing types (ext2 and ex
On Tue, Nov 13, 2018 at 08:22:26AM +0800, Ming Lei wrote:
> On Mon, Nov 12, 2018 at 12:02:36PM -0800, Greg Kroah-Hartman wrote:
> > On Mon, Nov 12, 2018 at 08:48:48AM -0800, Guenter Roeck wrote:
> > > On Mon, Nov 12, 2018 at 05:44:07PM +0800, Ming Lei wrote:
> > > >
Once multi-page bvec is enabled, the last bvec may include more than one
page; this patch uses bvec_last_segment() to truncate the bio.
Cc: Christoph Hellwig
Cc: linux-fsde...@vger.kernel.org
Signed-off-by: Ming Lei
---
fs/buffer.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff
This patch pulls the trigger for multi-page bvecs.
Now any request queue which supports queue cluster will see multi-page
bvecs.
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/bio.c | 24 ++--
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/block
079] EXT4-fs (sdb1): ext4_writepages: jbd2_start: 1024
pages, ino 407195; err -30
Thanks,
Ming Lei
fo/?l=linux-mm&m=147745525801433&w=2
[5], http://marc.info/?t=14956948457&r=1&w=2
[6], http://marc.info/?t=14982021534&r=1&w=2
Ming Lei (33):
block: rename bio_for_each_segment* with bio_for_each_page*
block: rename rq_for_each_segment as rq_for_each_pa
bvec
is in, each bvec will store a real multipage segment, so people won't be
confused by these wrong names.
Signed-off-by: Ming Lei
---
Documentation/block/biovecs.txt | 4 ++--
arch/m68k/emu/nfblock.c | 2 +-
arch/xtensa/platforms/iss/simdisk.c | 2 +-
block/bio-integr
This helper is used to iterate over a multipage bvec for bio splitting/merging,
and it is required in bio_clone_bioset() too, so introduce it.
Signed-off-by: Ming Lei
---
include/linux/bio.h | 34 +++---
include/linux/bvec.h | 36
2 files
rq_for_each_segment() still deceives us since this helper only returns
one page in each bvec, so fix its name.
Signed-off-by: Ming Lei
---
Documentation/block/biodoc.txt | 6 +++---
block/blk-core.c | 2 +-
drivers/block/floppy.c | 4 ++--
drivers/block/loop.c
bio_segments() never returns the count of actual segments, just like the
original bio_for_each_segment(), so rename it as bio_pages().
Signed-off-by: Ming Lei
---
block/bio.c| 2 +-
block/blk-merge.c | 2 +-
drivers/block/loop.c | 4 ++--
drivers/md
supporting the current
bvec iterator, which is treated as single-page only by drivers, fs, dm,
etc. These helpers will build single-page bvecs in flight, so users of
the current bio/bvec iterator can still work well and needn't change even
though we store real multipage segments in the bvec table.
Signed-o
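A toy user-space sketch of the "build single-page bvec in flight" idea
from the changelog above: given a multipage extent and the bytes already
consumed, compute the single-page view a legacy iterator would see. All
names (mp_vec, sp_view, mp_to_sp) are invented for illustration.

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Multipage "bvec": starts at byte `off` of page index 0, `len` bytes long. */
struct mp_vec { unsigned off, len; };

/* Single-page view produced in flight, as the text above describes. */
struct sp_view { unsigned page_idx, off, len; };

/*
 * Toy version of the mp->sp conversion: given how many bytes of the
 * multipage bvec have been consumed, return the single-page bvec that
 * a legacy (single-page) iterator would see next.
 */
static struct sp_view mp_to_sp(const struct mp_vec *mp, unsigned done)
{
    unsigned abs = mp->off + done;               /* absolute byte in the extent */
    struct sp_view sp = {
        .page_idx = abs / PAGE_SIZE,
        .off      = abs % PAGE_SIZE,
    };
    unsigned to_page_end = PAGE_SIZE - sp.off;   /* can't cross a page */
    unsigned remain      = mp->len - done;

    sp.len = remain < to_page_end ? remain : to_page_end;
    return sp;
}

int main(void)
{
    struct mp_vec mp = { .off = 3072, .len = 6144 }; /* spans 3 pages */

    for (unsigned done = 0; done < mp.len; ) {
        struct sp_view sp = mp_to_sp(&mp, done);
        printf("page %u off %u len %u\n", sp.page_idx, sp.off, sp.len);
        done += sp.len;
    }
    return 0;
}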
Preparing for supporting multipage bvec.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-bt...@vger.kernel.org
Signed-off-by: Ming Lei
---
fs/btrfs/compression.c | 5 -
fs/btrfs/extent_io.c | 5 +++--
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/btrfs
max segment size,
so we have to split the big bvec into several segments.
Thirdly, during splitting a multipage bvec into segments, the max segment
number may be reached; the bio then needs to be split when this happens.
Signed-off-by: Ming Lei
---
block/blk-merge.c | 90
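For illustration, a small user-space sketch of the splitting rules in
the changelog above: chop one big bvec into segments of at most the max
segment size, and stop once the max segment count is reached, at which
point the bio itself would be split. split_bvec() and its parameters
are made up here.

#include <stdio.h>

/*
 * Toy model of the splitting described above. Returns the bytes that
 * fit; *nsegs reports how many segments were consumed.
 */
static unsigned split_bvec(unsigned bvec_len, unsigned max_seg_size,
                           unsigned max_segs, unsigned *nsegs)
{
    unsigned fitted = 0;

    while (bvec_len && *nsegs < max_segs) {
        unsigned chunk = bvec_len < max_seg_size ? bvec_len : max_seg_size;

        fitted   += chunk;
        bvec_len -= chunk;
        (*nsegs)++;
    }
    return fitted; /* if < original len, the bio needs to be split here */
}

int main(void)
{
    unsigned nsegs = 0;
    unsigned fitted = split_bvec(10 * 4096, 2 * 4096, 3, &nsegs);

    printf("fitted %u bytes in %u segments\n", fitted, nsegs);
    return 0;
}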
As multipage bvec will be enabled soon, bio->bi_vcnt isn't the same as the
page count in the bio any more, so use bio_for_each_page_all() to
compute the number.
Signed-off-by: Ming Lei
---
include/linux/bio.h | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/inclu
It is more efficient to use bio_for_each_segment() to map sg; meanwhile
we have to consider splitting the multipage bvec as done in blk_bio_segment_split().
Signed-off-by: Ming Lei
---
block/blk-merge.c | 72 +++
1 file changed, 52 insertions
BTRFS and guard_bio_eod() need to get the last page from one segment, so
introduce this helper to make them happy.
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 22 ++
1 file changed, 22 insertions(+)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index
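A toy user-space counterpart of the helper described above: compute the
last single-page segment covered by a multipage bvec. last_segment() and
the structs are invented for illustration, not the real bvec_last_segment().

#include <stdio.h>

#define PAGE_SIZE 4096u

struct mp_vec  { unsigned off, len; };           /* multipage bvec (toy) */
struct sp_view { unsigned page_idx, off, len; }; /* single-page segment  */

/* Return the last single-page segment covered by a multipage bvec. */
static struct sp_view last_segment(const struct mp_vec *mp)
{
    unsigned last = mp->off + mp->len - 1;       /* last byte, absolute */
    struct sp_view sp = { .page_idx = last / PAGE_SIZE };
    unsigned page_start = sp.page_idx * PAGE_SIZE;
    unsigned seg_start  = mp->off > page_start ? mp->off : page_start;

    sp.off = seg_start - page_start;
    sp.len = last - seg_start + 1;
    return sp;
}

int main(void)
{
    struct mp_vec mp = { .off = 1024, .len = 2 * PAGE_SIZE }; /* ends mid-page 2 */
    struct sp_view sp = last_segment(&mp);

    printf("last segment: page %u off %u len %u\n", sp.page_idx, sp.off, sp.len);
    return 0;
}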
Once multipage bvec is enabled, the last bvec may include more than one
page; this patch uses segment_last_page() to truncate the bio.
Signed-off-by: Ming Lei
---
fs/buffer.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 249b83fafe48
iov_iter is implemented with the bvec iterator, so it is safe to pass a
segment to it, and this way is much more efficient than passing one
page in each bvec.
Signed-off-by: Ming Lei
---
drivers/block/loop.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/block
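A trivial user-space illustration of the efficiency claim above: one
contiguous segment costs a single bvec entry when passed whole, but one
entry per page otherwise. The numbers are made up for the example.

#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void)
{
    unsigned seg_len = 64 * 1024; /* one contiguous 64 KiB segment */

    unsigned per_segment = 1;
    unsigned per_page    = (seg_len + PAGE_SIZE - 1) / PAGE_SIZE;

    printf("entries per segment: %u\n", per_segment);
    printf("entries per page:    %u\n", per_page);
    return 0;
}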
There are still cases in which we need to use bio_segments() to get the
number of segments, so introduce it.
Signed-off-by: Ming Lei
---
include/linux/bio.h | 25 -
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
r
updating the bvec table directly, and users should be careful with this
helper since it returns a real multipage segment now.
Signed-off-by: Ming Lei
---
include/linux/bio.h | 18 ++
include/linux/bvec.h | 6 ++
2 files changed, 24 insertions(+)
diff --git a/include/linux/bio.h
There are still cases in which rq_for_each_segment() is required, for
example, loop.
Signed-off-by: Ming Lei
---
include/linux/blkdev.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1e8e9b430008..0b15bc625bd7 100644
--- a/include
There is one use case (DM) which requires cloning a bio segment by
segment, so introduce this API.
Signed-off-by: Ming Lei
---
block/bio.c | 56 +++--
include/linux/bio.h | 1 +
2 files changed, 43 insertions(+), 14 deletions(-)
diff
the cloned multipage bio.
Signed-off-by: Ming Lei
---
drivers/md/dm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index f1db181e082e..425e99e20f5c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1581,8 +1581,8 @@ stati
release all these pages if all
are dirtied, otherwise dirty them all in a deferred workqueue.
This patch introduces segment_for_each_page_all() to deal with the case
a bit more easily.
Signed-off-by: Ming Lei
---
block/bio.c | 45 +
include/linux
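For illustration, a user-space sketch of the per-page walk and the
dirty/release policy from the changelog above. The segment_for_each_page()
macro here is a made-up stand-in for segment_for_each_page_all().

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NR_PAGES  8

/* Toy segment: byte range [off, off+len) over an array of pages. */
struct segment { unsigned off, len; };

/* Toy stand-in for the per-page iteration the helper above provides. */
#define segment_for_each_page(seg, idx)                              \
    for ((idx) = (seg)->off / PAGE_SIZE;                             \
         (idx) <= ((seg)->off + (seg)->len - 1) / PAGE_SIZE;         \
         (idx)++)

int main(void)
{
    bool dirty[NR_PAGES] = { [0] = true, [1] = true, [2] = false };
    struct segment seg = { .off = 0, .len = 3 * PAGE_SIZE };
    unsigned idx;
    bool all_dirty = true;

    segment_for_each_page(&seg, idx)
        if (!dirty[idx])
            all_dirty = false;

    /* mirrors the policy in the changelog above */
    if (all_dirty)
        printf("all pages dirty: release them now\n");
    else
        printf("some pages clean: dirty them in a deferred workqueue\n");
    return 0;
}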
We have to convert to bio_for_each_page_all2() for iterating page by
page, since bio_for_each_page_all() can't be used any more after multipage
bvec is enabled.
Signed-off-by: Ming Lei
---
block/bio.c | 18 --
block/blk-zoned.c | 5 +++--
block/bounce.c
nabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei
---
drivers/md/bcache/btree.c | 3 ++-
drivers/md/bcache/util.c | 2 +-
drivers/md/dm-crypt.c | 3 ++-
drivers/md/raid1.c| 3 ++-
4 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei
---
fs/block_dev.c | 6 --
fs/crypto/bio.c | 3 ++-
fs/direct-io.c | 4 +++-
fs/iomap.c | 3 ++-
fs/mpage.c | 3 ++-
5
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Given bvec can't be changed under bio_for_each_page_all2(), this patch
marks the bvec parameter as 'const' for xfs_finish_page_writeback().
Sig
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei
---
fs/exofs/ore.c | 3 ++-
fs/exofs/ore_raid.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/exofs/
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Given bvec can't be changed inside bio_for_each_page_all2(), this patch
marks the bvec parameter as 'const' for gfs2_end_log_write_bh().
Sig
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei
---
fs/btrfs/compression.c | 3 ++-
fs/btrfs/disk-io.c | 3 ++-
fs/btrfs/extent_io.c | 9 ++---
fs/btrfs/inode.c