[LSF/MM ATTEND] I would like to attend to discuss several matters

2018-01-26 Thread Lee Duncan
I am interested in attending to help discuss all things storage. In particular, I am very interested in the "Patch Submission process and Handling Internal Conflict" proposed by James, and in "Improving Asynchronous SCSI Disk Probing" proposed by Bart. Lastly, I am also interested in discussing b

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-20 Thread Johannes Thumshirn
On Tue, Jan 17, 2017 at 05:45:53PM +0200, Sagi Grimberg wrote:
> --
> [1]
> queue = b'nvme0q1'
> usecs : count distribution
>   0 -> 1 : 7310 ||
>   2 -> 3 : 11   | |
>   4 -> 7

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Jens Axboe
On 01/18/2017 06:02 AM, Hannes Reinecke wrote: > On 01/17/2017 05:50 PM, Sagi Grimberg wrote: >> >>> So it looks like we are super not efficient because most of the >>> time we catch 1 >>> completion per interrupt and the whole point is that we need to find >>> more! This fio >>>

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Hannes Reinecke
On 01/19/2017 11:57 AM, Ming Lei wrote: > On Wed, Jan 11, 2017 at 11:07 PM, Jens Axboe wrote: >> On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: >>> Hi all, >>> >>> I'd like to attend LSF/MM and would like to discuss polling for block >>> drivers. >>> >>> Currently there is blk-iopoll but it is

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Ming Lei
On Wed, Jan 11, 2017 at 11:07 PM, Jens Axboe wrote: > On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: >> Hi all, >> >> I'd like to attend LSF/MM and would like to discuss polling for block >> drivers. >> >> Currently there is blk-iopoll but it is neither as widely used as NAPI in the >> network

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Sagi Grimberg
Christoph suggested to me once that we can take a hybrid approach where we consume a small number of completions (say 4) right away from the interrupt handler, and if we have more we schedule irq-poll to reap the rest. But back then it didn't work better, which is not aligned with my observations that
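
[Editor's note: the hybrid scheme Sagi describes can be sketched roughly as follows, using the irq_poll API from lib/irq_poll.c. The inline budget of 4, the nvme_poll_cq() helper, and the struct layout are invented for illustration; this is not the actual patch that was tested.]

#include <linux/interrupt.h>
#include <linux/irq_poll.h>

#define INLINE_BUDGET 4                 /* "say 4", per the mail above */

struct nvme_queue {                     /* stub; the real one lives in the driver */
        struct irq_poll iop;
};

/* Hypothetical helper: reap up to 'budget' CQ entries, return how many. */
static int nvme_poll_cq(struct nvme_queue *nvmeq, int budget);

static irqreturn_t nvme_hybrid_irq(int irq, void *data)
{
        struct nvme_queue *nvmeq = data;

        /* Consume a small number of completions in hard-IRQ context... */
        int reaped = nvme_poll_cq(nvmeq, INLINE_BUDGET);

        /* ...and only pay the soft-IRQ scheduling cost if more may be pending. */
        if (reaped == INLINE_BUDGET)
                irq_poll_sched(&nvmeq->iop);

        return IRQ_HANDLED;
}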

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Johannes Thumshirn
On Thu, Jan 19, 2017 at 10:23:28AM +0200, Sagi Grimberg wrote: > Christoph suggested to me once that we can take a hybrid > approach where we consume a small number of completions (say 4) > right away from the interrupt handler and if we have more > we schedule irq-poll to reap the rest. But back the

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Johannes Thumshirn
On Thu, Jan 19, 2017 at 10:12:17AM +0200, Sagi Grimberg wrote: > > >>>I think you missed: > >>>http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007 > >> > >>I indeed did, thanks. > >> > >But it doesn't help. > > > >We're still having to wait for the first interrupt, an

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Sagi Grimberg
I think you missed: http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007 I indeed did, thanks. But it doesn't help. We're still having to wait for the first interrupt, and if we're really fast that's the only completion we have to process. Try this: diff --gi

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Hannes Reinecke
On 01/18/2017 04:16 PM, Johannes Thumshirn wrote:
> On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>>> Hannes just spotted this:
>>> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>                          const struct blk_mq_queue_data *bd)
>>> {
>>> [...]
>>>    __n

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Andrey Kuzmin
On Wed, Jan 18, 2017 at 5:40 PM, Sagi Grimberg wrote: > >> Your report provided these stats with one-completion dominance for the >> single-threaded case. Does it also hold if you run multiple fio >> threads per core? > > > It's useless to run more threads on that core, it's already fully > utilize

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Johannes Thumshirn
On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
> Hannes just spotted this:
> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>                          const struct blk_mq_queue_data *bd)
> {
> [...]
>     __nvme_submit_cmd(nvmeq, &cmnd);
>     nvme_process_cq

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
Hannes just spotted this:

static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
                         const struct blk_mq_queue_data *bd)
{
        [...]
        __nvme_submit_cmd(nvmeq, &cmnd);
        nvme_process_cq(nvmeq);
        spin_unlock_irq(&nvmeq->q_lock);
        return BLK_MQ_RQ_QUEUE_OK;

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Johannes Thumshirn
On Wed, Jan 18, 2017 at 04:27:24PM +0200, Sagi Grimberg wrote: > > >So what you say is you saw a consumed == 1 [1] most of the time? > > > >[1] from > >http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836 > > Exactly. By processing 1 completion per interrupt it m

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
Your report provided these stats with one-completion dominance for the single-threaded case. Does it also hold if you run multiple fio threads per core? It's useless to run more threads on that core; it's already fully utilized. That single thread is already posting a fair amount of submission

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Andrey Kuzmin
On Wed, Jan 18, 2017 at 5:27 PM, Sagi Grimberg wrote: > >> So what you say is you saw a consumed == 1 [1] most of the time? >> >> [1] from >> http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836 > > > Exactly. By processing 1 completion per interrupt it makes perfe

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
So what you say is you saw a consumed == 1 [1] most of the time? [1] from http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836 Exactly. By processing 1 completion per interrupt it makes perfect sense why this performs poorly; it's not worth paying the soft-ir

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Hannes Reinecke
On 01/17/2017 05:50 PM, Sagi Grimberg wrote: > >> So it looks like we are super not efficient because most of the >> time we catch 1 >> completion per interrupt and the whole point is that we need to find >> more! This fio >> is single threaded with QD=32 so I'd expect that we

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Johannes Thumshirn
On Tue, Jan 17, 2017 at 06:38:43PM +0200, Sagi Grimberg wrote: > > >Just for the record, all tests you've run are with the upper irq_poll_budget > >of > >256 [1]? > > Yes, but that's the point, I never ever reach this budget because > I'm only processing 1-2 completions per interrupt. > > >We (

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
So it looks like we are super not efficient because most of the time we catch 1 completion per interrupt, and the whole point is that we need to find more! This fio is single-threaded with QD=32, so I'd expect us to be somewhere in 8-31 almost all the time... I also

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Johannes Thumshirn
On Tue, Jan 17, 2017 at 06:15:43PM +0200, Sagi Grimberg wrote: > Oh, and the current code that was tested can be found at: > > git://git.infradead.org/nvme.git nvme-irqpoll Just for the record, all tests you've run are with the upper irq_poll_budget of 256 [1]? We (Hannes and I) recently stumbe

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Just for the record, all tests you've run are with the upper irq_poll_budget of 256 [1]? Yes, but that's the point, I never ever reach this budget because I'm only processing 1-2 completions per interrupt. We (Hannes and I) recently stumbled across this when trying to poll for more than 256

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Oh, and the current code that was tested can be found at: git://git.infradead.org/nvme.git nvme-irqpoll

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Hey, so I did some initial analysis of what's going on with irq-poll. First, I sampled how much time passes between taking the interrupt in nvme_irq and the initial visit to nvme_irqpoll_handler. I ran a single-threaded fio with QD=32 of 4K reads. These are two displays of a histogram of the laten
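
[Editor's note: Sagi's histogram shown a few entries below came from a BCC/eBPF script; an equivalent in-driver measurement could be taken with ktime, along the lines of this sketch. The per-CPU timestamp assumes one queue per CPU, as in the single-threaded fio setup, and the handler names merely mirror the nvme-irqpoll branch.]

#include <linux/interrupt.h>
#include <linux/irq_poll.h>
#include <linux/ktime.h>
#include <linux/percpu.h>
#include <linux/printk.h>

static DEFINE_PER_CPU(ktime_t, irq_ts); /* timestamp taken in hard-IRQ context */

static irqreturn_t nvme_irq(int irq, void *data)
{
        this_cpu_write(irq_ts, ktime_get());
        /* ... schedule irq-poll as usual ... */
        return IRQ_HANDLED;
}

static int nvme_irqpoll_handler(struct irq_poll *iop, int budget)
{
        /* Latency from interrupt entry to the first irq-poll visit. */
        s64 usecs = ktime_us_delta(ktime_get(), this_cpu_read(irq_ts));

        pr_debug("irq -> irqpoll latency: %lld us\n", usecs);
        /* ... reap up to 'budget' completions ... */
        return 0;
}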

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
--
[1]
queue = b'nvme0q1'
usecs : count distribution
    0 -> 1  : 7310 ||
    2 -> 3  : 11   | |
    4 -> 7  : 10   | |
    8 -> 15 : 20   | |
   16

[LSF/MM ATTEND] storage topics plus container filesystems and IMA

2017-01-13 Thread James Bottomley
I'd like to attend LSF/MM mostly to participate in the storage topics discussion, but also because containers are starting to need some features from the VFS. Hopefully the container uid shifting is mostly sorted out by the superblock user namespace, and I should have patches to demonstrate this in

Re: [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-13 Thread Hannes Reinecke
On 01/13/2017 05:07 PM, Mike Snitzer wrote: > On Fri, Jan 13 2017 at 10:56am -0500, > Hannes Reinecke wrote: > >> On 01/12/2017 06:29 PM, Benjamin Marzinski wrote: [ .. ] >>> While I've daydreamed of rewriting the multipath tools multiple times, >>> and having nothing against you doing it in conce

Re: [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-13 Thread Mike Snitzer
On Fri, Jan 13 2017 at 10:56am -0500, Hannes Reinecke wrote: > On 01/12/2017 06:29 PM, Benjamin Marzinski wrote: > > On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote: > >> On 01/11/2017 11:23 PM, Mike Snitzer wrote: > >>> On Wed, Jan 11 2017 at 4:44am -0500, > >>> Hannes Reinecke

Re: [dm-devel] [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-13 Thread Hannes Reinecke
On 01/12/2017 06:29 PM, Benjamin Marzinski wrote: > On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote: >> On 01/11/2017 11:23 PM, Mike Snitzer wrote: >>> On Wed, Jan 11 2017 at 4:44am -0500, >>> Hannes Reinecke wrote: >>> Hi all, I'd like to attend LSF/MM this year, a

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-13 Thread Johannes Thumshirn
On Wed, Jan 11, 2017 at 08:13:02AM -0700, Jens Axboe wrote: > On 01/11/2017 08:07 AM, Jens Axboe wrote: > > On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: > >> Hi all, > >> > >> I'd like to attend LSF/MM and would like to discuss polling for block > >> drivers. > >> > >> Currently there is blk-

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Bart Van Assche
On Thu, 2017-01-12 at 10:41 +0200, Sagi Grimberg wrote: > First, when the nvme device fires an interrupt, the driver consumes > the completion(s) from the interrupt (usually there will be some more > completions waiting in the cq by the time the host starts processing it). > With irq-poll, we disabl

Re: [Lsf-pc] [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Johannes Thumshirn
On Thu, Jan 12, 2017 at 04:41:00PM +0200, Sagi Grimberg wrote: > > >>**Note: when I ran multiple threads on more cpus the performance > >>degradation phenomenon disappeared, but I tested on a VM with > >>qemu emulation backed by null_blk so I figured I had some other > >>bottleneck somewhere (that

Re: [dm-devel] [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-12 Thread Benjamin Marzinski
On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote: > On 01/11/2017 11:23 PM, Mike Snitzer wrote: > >On Wed, Jan 11 2017 at 4:44am -0500, > >Hannes Reinecke wrote: > > > >>Hi all, > >> > >>I'd like to attend LSF/MM this year, and would like to discuss a > >>redesign of the multipath

Re: [Lsf-pc] [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
**Note: when I ran multiple threads on more cpus the performance degradation phenomenon disappeared, but I tested on a VM with qemu emulation backed by null_blk so I figured I had some other bottleneck somewhere (that's why I asked for some more testing). That could be because of the vmexits a

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Johannes Thumshirn
On Thu, Jan 12, 2017 at 01:44:05PM +0200, Sagi Grimberg wrote:
[...]
> It's pretty basic:
> --
> [global]
> group_reporting
> cpus_allowed=0
> cpus_allowed_policy=split
> rw=randrw
> bs=4k
> numjobs=4
> iodepth=32
> runtime=60
> time_based
> loops=1
> ioengine=libaio
> direct=1
> invalidate=1
> rand

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
I agree with Jens that we'll need some analysis if we want the discussion to be effective, and I can spend some time on this if I can find volunteers with high-end nvme devices (I only have access to client nvme devices. I have a P3700 but somehow burned the FW. Let me see if I can bring it back

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Johannes Thumshirn
On Thu, Jan 12, 2017 at 10:23:47AM +0200, Sagi Grimberg wrote: > > >>>Hi all, > >>> > >>>I'd like to attend LSF/MM and would like to discuss polling for block > >>>drivers. > >>> > >>>Currently there is blk-iopoll but it is neither as widely used as NAPI in > >>>the > >>>networking field and acc

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread sagi grimberg
A typical Ethernet network adapter delays the generation of an interrupt after it has received a packet. A typical block device or HBA does not delay the generation of an interrupt that reports an I/O completion. >>> >>> NVMe allows for configurable interrupt coalescing, as

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
I'd like to attend LSF/MM and would like to discuss polling for block drivers. Currently there is blk-iopoll, but it is neither as widely used as NAPI in the networking field, nor, according to Sagi's findings in [1], is its polling performance on par with IRQ usage. At LSF/MM I'd like to whet

Re: [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-12 Thread Hannes Reinecke
On 01/11/2017 11:23 PM, Mike Snitzer wrote: On Wed, Jan 11 2017 at 4:44am -0500, Hannes Reinecke wrote: Hi all, I'd like to attend LSF/MM this year, and would like to discuss a redesign of the multipath handling. With recent kernels we've got quite some functionality required for multipathi

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
Hi all, I'd like to attend LSF/MM and would like to discuss polling for block drivers. Currently there is blk-iopoll, but it is neither as widely used as NAPI in the networking field, nor, according to Sagi's findings in [1], is its polling performance on par with IRQ usage. At LSF/MM I'd lik

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Stephen Bates
> > This is a separate topic. The initial proposal is for polling for > interrupt mitigation; you are talking about polling in the context of > polling for completion of an IO. > > We can definitely talk about this form of polling as well, but it should > be a separate topic and probably proposed i

[LSF/MM TOPIC][LSF/MM ATTEND] IO completion polling for block drivers

2017-01-11 Thread Stephen Bates
Hi I'd like to discuss the ongoing work in the kernel to enable high priority IO via polling for completion in the blk-mq subsystem. Given that iopoll only really makes sense for low-latency, low queue depth environments (i.e. down below 10-20us) I'd like to discuss which drivers we think will ne
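
[Editor's note: for reference, the completion-polling path Stephen refers to is driven from userspace with preadv2()/pwritev2() and the RWF_HIPRI flag. A minimal sketch, assuming a kernel and glibc new enough to expose both, with /dev/nvme0n1 chosen arbitrarily:]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        /* O_DIRECT read from a raw block device; adjust the path as needed. */
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, 4096)) return 1;
        struct iovec iov = { .iov_base = buf, .iov_len = 4096 };

        /* RWF_HIPRI asks blk-mq to spin on the completion queue instead of
         * sleeping until the interrupt arrives. */
        ssize_t ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
        if (ret < 0)
                perror("preadv2");
        else
                printf("read %zd bytes with polled completion\n", ret);

        close(fd);
        free(buf);
        return 0;
}

Polling also has to be enabled on the queue (echo 1 > /sys/block/nvme0n1/queue/io_poll) for the flag to take effect.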

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Stephen Bates
>> >> I'd like to attend LSF/MM and would like to discuss polling for block >> drivers. >> >> Currently there is blk-iopoll but it is neither as widely used as NAPI >> in the networking field and according to Sagi's findings in [1] >> performance with polling is not on par with IRQ usage. >> >> On L

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Jens Axboe
On 01/11/2017 09:36 PM, Stephen Bates wrote: >>> >>> I'd like to attend LSF/MM and would like to discuss polling for block >>> drivers. >>> >>> Currently there is blk-iopoll but it is neither as widely used as NAPI >>> in the networking field and according to Sagi's findings in [1] >>> performance w

Re: [LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-11 Thread Mike Snitzer
On Wed, Jan 11 2017 at 4:44am -0500, Hannes Reinecke wrote: > Hi all, > > I'd like to attend LSF/MM this year, and would like to discuss a > redesign of the multipath handling. > > With recent kernels we've got quite some functionality required for > multipathing already implemented, making so

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Hannes Reinecke
On 01/11/2017 05:26 PM, Bart Van Assche wrote: > On Wed, 2017-01-11 at 17:22 +0100, Hannes Reinecke wrote: >> On 01/11/2017 05:12 PM, h...@infradead.org wrote: >>> On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: A typical Ethernet network adapter delays the generation of an >>

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Bart Van Assche
On Wed, 2017-01-11 at 17:22 +0100, Hannes Reinecke wrote: > On 01/11/2017 05:12 PM, h...@infradead.org wrote: > > On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: > > > A typical Ethernet network adapter delays the generation of an > > > interrupt > > > after it has received a packe

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Hannes Reinecke
On 01/11/2017 05:12 PM, h...@infradead.org wrote: > On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: >> A typical Ethernet network adapter delays the generation of an interrupt >> after it has received a packet. A typical block device or HBA does not delay >> the generation of an in

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Jens Axboe
On 01/11/2017 09:12 AM, h...@infradead.org wrote: > On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: >> A typical Ethernet network adapter delays the generation of an interrupt >> after it has received a packet. A typical block device or HBA does not delay >> the generation of an in

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Johannes Thumshirn
On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: [...] > A typical Ethernet network adapter delays the generation of an interrupt > after it has received a packet. A typical block device or HBA does not delay > the generation of an interrupt that reports an I/O completion. I think

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Bart Van Assche
On Wed, 2017-01-11 at 14:43 +0100, Johannes Thumshirn wrote: > I'd like to attend LSF/MM and would like to discuss polling for block > drivers. > > Currently there is blk-iopoll but it is neither as widely used as NAPI in > the networking field and according to Sagi's findings in [1] performance >

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread h...@infradead.org
On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote: > A typical Ethernet network adapter delays the generation of an interrupt > after it has received a packet. A typical block device or HBA does not delay > the generation of an interrupt that reports an I/O completion. NVMe allows fo

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Hannes Reinecke
On 01/11/2017 04:07 PM, Jens Axboe wrote: > On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: >> Hi all, >> >> I'd like to attend LSF/MM and would like to discuss polling for block >> drivers. >> >> Currently there is blk-iopoll but it is neither as widely used as NAPI in the >> networking field a

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Jens Axboe
On 01/11/2017 08:07 AM, Jens Axboe wrote: > On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: >> Hi all, >> >> I'd like to attend LSF/MM and would like to discuss polling for block >> drivers. >> >> Currently there is blk-iopoll but it is neither as widely used as NAPI in the >> networking field a

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Jens Axboe
On 01/11/2017 06:43 AM, Johannes Thumshirn wrote: > Hi all, > > I'd like to attend LSF/MM and would like to discuss polling for block drivers. > > Currently there is blk-iopoll but it is neither as widely used as NAPI in the > networking field and according to Sagi's findings in [1] performance wi

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Hannes Reinecke
On 01/11/2017 02:43 PM, Johannes Thumshirn wrote: > Hi all, > > I'd like to attend LSF/MM and would like to discuss polling for block drivers. > > Currently there is blk-iopoll but it is neither as widely used as NAPI in the > networking field and according to Sagi's findings in [1] performance wi

[LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Johannes Thumshirn
Hi all, I'd like to attend LSF/MM and would like to discuss polling for block drivers. Currently there is blk-iopoll, but it is neither as widely used as NAPI in the networking field, nor, according to Sagi's findings in [1], is its polling performance on par with IRQ usage. At LSF/MM I'd like t
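
[Editor's note: for readers unfamiliar with the interface under discussion, blk-iopoll was generalized into irq_poll (lib/irq_poll.c) in the 4.5 cycle, and a driver adopts the NAPI-like pattern roughly as below. my_queue, my_reap_completions(), and the weight of 64 are invented for illustration.]

#include <linux/interrupt.h>
#include <linux/irq_poll.h>
#include <linux/kernel.h>

#define MY_POLL_WEIGHT 64               /* arbitrary per-visit budget */

struct my_queue {
        struct irq_poll iop;
        int irq;
};

/* Hypothetical helper: reap up to 'budget' completions, return the count. */
static int my_reap_completions(struct my_queue *q, int budget);

/* Soft-IRQ poll callback, called repeatedly until the queue is drained. */
static int my_poll(struct irq_poll *iop, int budget)
{
        struct my_queue *q = container_of(iop, struct my_queue, iop);
        int done = my_reap_completions(q, budget);

        if (done < budget) {
                /* Drained: stop polling and let interrupts fire again. */
                irq_poll_complete(iop);
                enable_irq(q->irq);
        }
        return done;
}

/* Hard-IRQ handler: mask the line and defer the work to irq_poll. */
static irqreturn_t my_irq(int irq, void *data)
{
        struct my_queue *q = data;

        disable_irq_nosync(irq);
        irq_poll_sched(&q->iop);
        return IRQ_HANDLED;
}

/* At init time: irq_poll_init(&q->iop, MY_POLL_WEIGHT, my_poll); */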

[LSF/MM TOPIC][LSF/MM ATTEND] multipath redesign

2017-01-11 Thread Hannes Reinecke
Hi all, I'd like to attend LSF/MM this year, and would like to discuss a redesign of the multipath handling. With recent kernels we've got quite some functionality required for multipathing already implemented, making some design decisions of the original multipath-tools implementation quite poin

[LSF/MM TOPIC] [LSF/MM ATTEND] XCOPY

2016-12-09 Thread Ewan D. Milne
I'd like to attend LSF/MM this year, and I'd like to discuss XCOPY and what we need to do to get this functionality incorporated. Martin/Mikulas' patches have been around for a while; I think the last holdup was some request flag changes that have since been resolved. We do need to have a token-b

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-03-01 Thread Christoph Hellwig
Any chance the two of you could occasionally trim the mails you're quoting?

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-03-01 Thread Jan Kara
> , "lsf...@lists.linuxfoundation.org" > > Subject: Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR > disks chunked writepages > > > >On Mon 29-02-16 02:02:16, Damien Le Moal wrote: > >> > >> >On Wed 24-02-16 01:

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-29 Thread Damien Le Moal
From: Jan Kara Date: Monday, February 29, 2016 at 22:40 To: Damien Le Moal Cc: linux-bl...@vger.kernel.org, Bart Van Assche, Matias Bjorling, linux-scsi@vger.kernel.org, lsf...@lists.linuxfoundation.org Subject: Re: [Lsf-pc]

Re: [LSF/MM TOPIC][LSF/MM ATTEND] LIO Target Synchronization with sysfs [resend]

2016-02-29 Thread Lee Duncan
On 02/27/2016 01:54 PM, Nicholas A. Bellinger wrote: > On Sat, 2016-02-27 at 08:19 -0800, Lee Duncan wrote: >> [Apologies for the resend.] >> >> I would like to attend LSF/MM this year. I would like to discuss >> problems I've had dealing with LIO targets and their inherent >> asynchronicity in sys

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-29 Thread Jan Kara
On Mon 29-02-16 02:02:16, Damien Le Moal wrote: > > >On Wed 24-02-16 01:53:24, Damien Le Moal wrote: > >> > >> >On Tue 23-02-16 05:31:13, Damien Le Moal wrote: > >> >> > >> >> >On 02/22/16 18:56, Damien Le Moal wrote: > >> >> >> 2) Write back of dirty pages to SMR block devices: > >> >> >> > >>

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-28 Thread Damien Le Moal
>On 02/29/2016 10:02 AM, Damien Le Moal wrote: >> >>> On Wed 24-02-16 01:53:24, Damien Le Moal wrote: > On Tue 23-02-16 05:31:13, Damien Le Moal wrote: >> >>> On 02/22/16 18:56, Damien Le Moal wrote: 2) Write back of dirty pages to SMR block devices: Di

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-28 Thread Hannes Reinecke
On 02/29/2016 10:02 AM, Damien Le Moal wrote: > >> On Wed 24-02-16 01:53:24, Damien Le Moal wrote: >>> On Tue 23-02-16 05:31:13, Damien Le Moal wrote: > >> On 02/22/16 18:56, Damien Le Moal wrote: >>> 2) Write back of dirty pages to SMR block devices: >>> >>> Dirty pages o

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-28 Thread Damien Le Moal
>On Wed 24-02-16 01:53:24, Damien Le Moal wrote: >> >> >On Tue 23-02-16 05:31:13, Damien Le Moal wrote: >> >> >> >> >On 02/22/16 18:56, Damien Le Moal wrote: >> >> >> 2) Write back of dirty pages to SMR block devices: >> >> >> >> >> >> Dirty pages of a block device inode are currently processed

Re: [Lsf-pc] [LSF/MM TOPIC][LSF/MM ATTEND] blk-mq and I/O scheduling

2016-02-28 Thread Hannes Reinecke
On 02/29/2016 12:13 AM, Christoph Hellwig wrote: > I was still hoping we'd get your slicing patches in ASAP at least. > But there are a couple more topics here, so I think it would > still be useful in that case. > Actually I have been pondering an alternative approach. In most cases submit_bio() /

Re: [Lsf-pc] [LSF/MM TOPIC][LSF/MM ATTEND] blk-mq and I/O scheduling

2016-02-28 Thread Christoph Hellwig
I was still hoping we'd get your slicing patches in ASAP at least. But there are a couple more topics here, so I think it would still be useful in that case.

Re: [LSF/MM TOPIC][LSF/MM ATTEND] LIO Target Synchronization with sysfs [resend]

2016-02-27 Thread Nicholas A. Bellinger
On Sat, 2016-02-27 at 08:19 -0800, Lee Duncan wrote: > [Apologies for the resend.] > > I would like to attend LSF/MM this year. I would like to discuss > problems I've had dealing with LIO targets and their inherent > asynchronicity in sysfs. I believe that with judicious use of kernel > events we

[LSF/MM TOPIC][LSF/MM ATTEND] LIO Target Synchronization with sysfs [resend]

2016-02-27 Thread Lee Duncan
[Apologies for the resend.] I would like to attend LSF/MM this year. I would like to discuss problems I've had dealing with LIO targets and their inherent asynchronicity in sysfs. I believe that with judicious use of kernel events we can help user space handle target changes more cleanly. As for

[LSF/MM TOPIC][LSF/MM ATTEND] blk-mq and I/O scheduling

2016-02-26 Thread Andreas Herrmann
Hi, I'd like to participate in LSF/MM and would like to present/discuss ideas for introducing I/O scheduling support to blk-mq. Motivation for this is to be able to use scsi-mq even on systems that have slow (spinning) devices attached to the SCSI stack. I think the presentation/discussion should

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-24 Thread Jan Kara
On Wed 24-02-16 01:53:24, Damien Le Moal wrote: > > >On Tue 23-02-16 05:31:13, Damien Le Moal wrote: > >> > >> >On 02/22/16 18:56, Damien Le Moal wrote: > >> >> 2) Write back of dirty pages to SMR block devices: > >> >> > >> >> Dirty pages of a block device inode are currently processed using the

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-23 Thread Damien Le Moal
>On Tue 23-02-16 05:31:13, Damien Le Moal wrote: >> >> >On 02/22/16 18:56, Damien Le Moal wrote: >> >> 2) Write back of dirty pages to SMR block devices: >> >> >> >> Dirty pages of a block device inode are currently processed using the >> >> generic_writepages function, which can be executed simu

Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-23 Thread Jan Kara
On Tue 23-02-16 05:31:13, Damien Le Moal wrote: > > >On 02/22/16 18:56, Damien Le Moal wrote: > >> 2) Write back of dirty pages to SMR block devices: > >> > >> Dirty pages of a block device inode are currently processed using the > >> generic_writepages function, which can be executed simultaneous

Re: [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-22 Thread Damien Le Moal
>On 02/22/16 18:56, Damien Le Moal wrote: >> 2) Write back of dirty pages to SMR block devices: >> >> Dirty pages of a block device inode are currently processed using the >> generic_writepages function, which can be executed simultaneously >> by multiple contexts (e.g. sync, fsync, msync, sync_fil

Re: [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-22 Thread Bart Van Assche
On 02/22/16 18:56, Damien Le Moal wrote: 2) Write back of dirty pages to SMR block devices: Dirty pages of a block device inode are currently processed using the generic_writepages function, which can be executed simultaneously by multiple contexts (e.g. sync, fsync, msync, sync_file_range, etc.).

[LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

2016-02-22 Thread Damien Le Moal
Hello, I would like to attend LSF/MM 2016 to discuss the following topics. 1) Online Logical Head Depop Some disk drives available on the market already provide a "logical depop" function which allows a system to decommission a defective disk head, reformat the disk and continue using this same

[LSF/MM ATTEND][LSF/MM TOPIC] SMR/Zoned devices IO

2016-01-29 Thread Adrian Palmer
Greetings: I would like to attend LSF/MM to continue discussion related to SMR. Besides the SMRFFS, we've been developing a zoned device mapper (ZDM). I would like to gather feedback and direction. I can provide a quick explanation of the architecture. This will enable existing software to work

[LSF/MM ATTEND] dm-multipath, nvme, (remote) pmem and friends

2016-01-28 Thread Sagi Grimberg
Hi, I'd like to attend LSF/MM 2016. I've been working on scsi rdma transports and the target stack for some time now. Lately I've been looking at nvme as well and I think I can contribute to the dm-multipath discussions in the context of nvme and blk-mq performance. If we plan to talk about nvme

[LSF/MM ATTEND]

2015-02-11 Thread Chad Dupuis
I'd like to attend LSF. I'm currently helping to maintain the qla2xxx and bnx2fc drivers. I have been developing low-level Linux SCSI drivers for the last 5 years and have been working with the Linux storage stack for over 10. Specifically, I'd like to be included in the discussions about

Re: [LSF/MM ATTEND] [LSF/MM TOPIC]

2015-01-10 Thread Matias Bjorling
On 01/09/2015 05:14 PM, Ewan Milne wrote: > I'd like to attend LSF -- I am responsible for maintaining the SCSI > subsystem at Red Hat, and in addition to resolving issues for customers > and partners, I've been participating in upstream development for the > past couple of years. I have an extens

Re: [LSF/MM ATTEND] [LSF/MM TOPIC]

2015-01-10 Thread Hannes Reinecke
On 01/09/2015 05:14 PM, Ewan Milne wrote: > I'd like to attend LSF -- I am responsible for maintaining the SCSI > subsystem at Red Hat, and in addition to resolving issues for customers > and partners, I've been participating in upstream development for the > past couple of years. I have an extens

[LSF/MM ATTEND] [LSF/MM TOPIC]

2015-01-09 Thread Ewan Milne
I'd like to attend LSF -- I am responsible for maintaining the SCSI subsystem at Red Hat, and in addition to resolving issues for customers and partners, I've been participating in upstream development for the past couple of years. I have an extensive background in SCSI and OS development, includi

Re: [LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-08 Thread Andy Grover
On 01/08/2015 03:11 PM, Douglas Gilbert wrote: Andy, XCOPY is huge and keeps getting bigger (apart from the recent trimming of the "LID1" generation mentioned above). Most SCSI commands are sent to the LU involved, but for the EXTENDED COPY command itself, should it be sent to the source or destinati

Re: [LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-08 Thread Douglas Gilbert
On 15-01-08 01:58 PM, Andy Grover wrote: On 01/07/2015 02:11 PM, Douglas Gilbert wrote: T10 have now dropped the LID1 and LID4 stuff (it's the length of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting all LID1 variants in SPC-5 revision 01. So the LID1 variants are now gone and the LI

Re: [LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-08 Thread Nicholas A. Bellinger
On Thu, 2015-01-08 at 10:58 -0800, Andy Grover wrote: > On 01/07/2015 02:11 PM, Douglas Gilbert wrote: > > T10 have now dropped the LID1 and LID4 stuff (it's the length > > of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting > > all LID1 variants in SPC-5 revision 01. So the LID1 variants

Re: [LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-08 Thread Andy Grover
On 01/07/2015 02:11 PM, Douglas Gilbert wrote: T10 have now dropped the LID1 and LID4 stuff (it's the length of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting all LID1 variants in SPC-5 revision 01. So the LID1 variants are now gone and the LID4 appendage is dropped from the remaining

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-08 Thread Martin K. Petersen
> "Hannes" == Hannes Reinecke writes: Hannes> But the array might prioritize 'normal' I/O requests, and treat Hannes> XCOPY with a lower priority. So given enough load XCOPY might Hannes> actually be slower than a normal copy ... Yes, but may have lower impact on concurrent I/O. -- Martin

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-07 Thread Hannes Reinecke
On 01/08/2015 12:39 AM, Martin K. Petersen wrote: >> "Hannes" == Hannes Reinecke writes: > > Hannes> Not quite. XCOPY is optional, and the system works well without > Hannes> it. So the exception handling would need to copy things by hand > Hannes> to avoid regressions. > > Or defer to user

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-07 Thread Martin K. Petersen
> "Hannes" == Hannes Reinecke writes: Hannes> Not quite. XCOPY is optional, and the system works well without Hannes> it. So the exception handling would need to copy things by hand Hannes> to avoid regressions. Or defer to user space. But it's really no different from how we do WRITE SAME

Re: [LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-07 Thread Douglas Gilbert
On 15-01-07 04:21 PM, Nicholas A. Bellinger wrote: Hi LSF-PC folks, I'd like to attend LSF/MM 2015 this year in Boston. The topics I'm interested in are iSCSI/iSER multi-queue (MC/S) support within the open-iscsi initiator, and pushing forward the pieces necessary for EXTENDED_COPY host side su

[LSF/MM ATTEND] iSCSI/iSER MQ + EXTENDED_COPY host support

2015-01-07 Thread Nicholas A. Bellinger
Hi LSF-PC folks, I'd like to attend LSF/MM 2015 this year in Boston. The topics I'm interested in are iSCSI/iSER multi-queue (MC/S) support within the open-iscsi initiator, and pushing forward the pieces necessary for EXTENDED_COPY host side support so we can finally begin to utilize copy-offload

Re: [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-07 Thread Nicholas A. Bellinger
On Sun, 2015-01-04 at 12:16 -0500, Mike Snitzer wrote: > Hi, > > I'd like to attend LSF (and the first day of Vault). > > As a DM maintainer I'm open to discussing anything DM related. In > particular I'd like to at least have hallway track discussions with > key people interested in DM multipat

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-07 Thread Hannes Reinecke
On 01/07/2015 12:55 AM, Martin K. Petersen wrote: >> "Hannes" == Hannes Reinecke writes: > > Hannes> Yep. That definitely needs to be discussed. Especially we'd > Hannes> need to discuss how to handle exceptions, seeing that XCOPY > Hannes> might fail basically at any time. > > Like any S

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-06 Thread Martin K. Petersen
> "Hannes" == Hannes Reinecke writes: Hannes> Yep. That definitely needs to be discussed. Especially we'd Hannes> need to discuss how to handle exceptions, seeing that XCOPY Hannes> might fail basically at any time. Like any SCSI command :) -- Martin K. Petersen Oracle Linux Engine

Re: [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-06 Thread Martin K. Petersen
> "Mike" == Mike Snitzer writes: Mike> Another topic we need to come to terms with is the state of XCOPY Mike> (whether the initial approach needs further work, etc) -- Mikulas Mike> Patocka reworked aspects of Martin's initial approach but it Mike> hasn't progressed upstream: Yeah, I'd like

Re: [dm-devel] [LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-05 Thread Hannes Reinecke
On 01/04/2015 06:16 PM, Mike Snitzer wrote: > Hi, > > I'd like to attend LSF (and the first day of Vault). > > As a DM maintainer I'm open to discussing anything DM related. In > particular I'd like to at least have hallway track discussions with > key people interested in DM multipath support f

[LSF/MM ATTEND] discuss blk-mq related to DM-multipath and status of XCOPY

2015-01-04 Thread Mike Snitzer
Hi, I'd like to attend LSF (and the first day of Vault). As a DM maintainer I'm open to discussing anything DM related. In particular I'd like to at least have hallway track discussions with key people interested in DM multipath support for blk-mq devices. Keith Busch and I have gone ahead and i
