> >
> > > >
> > > > Enable asynchronous flush for virtio pmem using work queue. Also,
> > > > coalesce the flush requests when a flush is already in process.
> > > > This functionality is copied from md/RAID code.
> > > >
> > > > When a flush is already in process, new flush requests wait till
> > > > previous flush completes in another context (work queue). For all
> > > > the requests come between ongoing flush and new flush start time, only
> > > > single flush executes, thus adhers to flush coalscing logic. This is
> > >
> > > s/adhers/adheres/
> > >
> > > s/coalscing/coalescing/
> > >
> > > > important for maintaining the flush request order with request
> > > > coalscing.
> > >
> > > s/coalscing/coalescing/
> >
> > o.k. Sorry for the spelling mistakes.
> >
> > >
> > > >
> > > > Signed-off-by: Pankaj Gupta <pankaj.gupta.li...@gmail.com>
> > > > ---
> > > >  drivers/nvdimm/nd_virtio.c   | 74 +++++++++++++++++++++++++++---------
> > > >  drivers/nvdimm/virtio_pmem.c | 10 +++++
> > > >  drivers/nvdimm/virtio_pmem.h | 16 ++++++++
> > > >  3 files changed, 83 insertions(+), 17 deletions(-)
> > > >
> > > > diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
> > > > index 10351d5b49fa..179ea7a73338 100644
> > > > --- a/drivers/nvdimm/nd_virtio.c
> > > > +++ b/drivers/nvdimm/nd_virtio.c
> > > > @@ -100,26 +100,66 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
> > > >  /* The asynchronous flush callback function */
> > > >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
> > > >  {
> > > > -        /*
> > > > -         * Create child bio for asynchronous flush and chain with
> > > > -         * parent bio. Otherwise directly call nd_region flush.
> > > > +        /* queue asynchronous flush and coalesce the flush requests */
> > > > +        struct virtio_device *vdev = nd_region->provider_data;
> > > > +        struct virtio_pmem *vpmem = vdev->priv;
> > > > +        ktime_t req_start = ktime_get_boottime();
> > > > +        int ret = -EINPROGRESS;
> > > > +
> > > > +        spin_lock_irq(&vpmem->lock);
> > >
> > > Why a new lock and not continue to use ->pmem_lock?
> >
> > This spinlock is to protect entry in 'wait_event_lock_irq'
> > and the Other spinlock is to protect virtio queue data.
>
> Understood, but md shares the mddev->lock for both purposes, so I
> would ask that you either document what motivates the locking split,
> or just reuse the lock until a strong reason to split them arises.
O.k. I will check again whether we can reuse the same lock for both, or else
document why the split is needed. A rough sketch of what reusing ->pmem_lock
could look like is below.

Thanks,
Pankaj
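The sketch below is only for illustration, not the posted patch: it is
modeled on md_flush_request(), and everything beyond what the diff above
already shows is an assumption. In particular, the struct virtio_pmem fields
other than pmem_lock (sb_wait, flush_bio, prev_flush_start, flush_work,
pmem_wq, nd_region), the helper virtio_pmem_flush_work() and the
-EINPROGRESS/0 return contract with the caller are made up for the example.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/ktime.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include "virtio_pmem.h"
#include "nd.h"

/*
 * Fields assumed to exist on struct virtio_pmem for this sketch:
 *      spinlock_t pmem_lock;             existing lock, also guards flush state
 *      wait_queue_head_t sb_wait;        waiters for the in-flight flush
 *      struct bio *flush_bio;            bio owning the in-flight flush
 *      ktime_t prev_flush_start;         start time of last completed flush
 *      struct work_struct flush_work;    INIT_WORK() once in probe, assumed
 *      struct workqueue_struct *pmem_wq; created in probe, assumed
 *      struct nd_region *nd_region;      for the existing virtio_pmem_flush()
 */

static void virtio_pmem_flush_work(struct work_struct *work)
{
        struct virtio_pmem *vpmem =
                container_of(work, struct virtio_pmem, flush_work);
        struct bio *bio = vpmem->flush_bio;
        ktime_t start = ktime_get_boottime();
        int err;

        /* Existing synchronous flush; it takes pmem_lock for virtqueue access. */
        err = virtio_pmem_flush(vpmem->nd_region);

        /*
         * Publish the start time only after the flush has finished, so any
         * waiter whose req_start predates it knows its data was covered.
         * Then drop ownership and wake all coalesced requests.
         */
        spin_lock_irq(&vpmem->pmem_lock);
        vpmem->prev_flush_start = start;
        vpmem->flush_bio = NULL;
        spin_unlock_irq(&vpmem->pmem_lock);
        wake_up(&vpmem->sb_wait);

        if (err)
                bio->bi_status = errno_to_blk_status(err);
        bio_endio(bio);
}

int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
{
        struct virtio_device *vdev = nd_region->provider_data;
        struct virtio_pmem *vpmem = vdev->priv;
        ktime_t req_start = ktime_get_boottime();

        spin_lock_irq(&vpmem->pmem_lock);
        /*
         * Sleep while a flush is in flight, unless a flush that started
         * after this request already completed (our data is then covered).
         * wait_event_lock_irq() drops pmem_lock while sleeping.
         */
        wait_event_lock_irq(vpmem->sb_wait,
                            !vpmem->flush_bio ||
                            ktime_before(req_start, vpmem->prev_flush_start),
                            vpmem->pmem_lock);
        if (ktime_after(req_start, vpmem->prev_flush_start)) {
                /* No completed flush covers us: take ownership of the next one. */
                WARN_ON(vpmem->flush_bio);
                vpmem->flush_bio = bio;
                bio = NULL;
        }
        spin_unlock_irq(&vpmem->pmem_lock);

        if (bio)
                return 0;               /* already covered; caller ends the bio */

        /* Only the owning request queues the work, so it is never double-queued. */
        queue_work(vpmem->pmem_wq, &vpmem->flush_work);
        return -EINPROGRESS;            /* flush_work will end the bio */
}

The idea of reusing pmem_lock would be that the coalescing state and the
virtqueue are serialised by the same lock, as md does with mddev->lock, at
the cost of flush submissions briefly contending with virtqueue completions.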