I am interested in attending to help discuss all things storage.
In particular, I am very interested in the "Patch Submission process and
Handling Internal Conflict" proposed by James, and in "Improving
Asynchronous SCSI Disk Probing" proposed by Bart.
Lastly, I am also interested in discussing b
On Tue, Jan 17, 2017 at 05:45:53PM +0200, Sagi Grimberg wrote:
>
> >--
> >[1]
> >queue = b'nvme0q1'
> >     usecs        : count
> >         0 -> 1   : 7310
> >         2 -> 3   : 11
> >         4 -> 7
On 01/18/2017 06:02 AM, Hannes Reinecke wrote:
> On 01/17/2017 05:50 PM, Sagi Grimberg wrote:
>>
>>> So it looks like we are super not efficient because most of the
>>> times we catch 1
>>> completion per interrupt and the whole point is that we need to find
>>> more! This fio
>>>
On 01/19/2017 11:57 AM, Ming Lei wrote:
> On Wed, Jan 11, 2017 at 11:07 PM, Jens Axboe wrote:
>> On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
>>> Hi all,
>>>
>>> I'd like to attend LSF/MM and would like to discuss polling for block
>>> drivers.
>>>
>>> Currently there is blk-iopoll but it is
On Wed, Jan 11, 2017 at 11:07 PM, Jens Axboe wrote:
> On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
>> Hi all,
>>
>> I'd like to attend LSF/MM and would like to discuss polling for block
>> drivers.
>>
>> Currently there is blk-iopoll but it is neither as widely used as NAPI in the
>> network
Christoph suggested to me once that we can take a hybrid
approach where we consume a small amount of completions (say 4)
right away from the interrupt handler and if we have more
we schedule irq-poll to reap the rest. But back then it
didn't work better, which is not aligned with my observations
that
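A minimal sketch of that hybrid idea, assuming the kernel's generic irq_poll
API (irq_poll_sched()/irq_poll_complete()) and a hypothetical
my_reap_completions() helper that drains up to a given number of CQ entries;
this only illustrates the approach, it is not the code that was actually
tested:
--
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/irq_poll.h>

#define INLINE_BUDGET 4 /* completions reaped directly in hard-irq context */

struct my_queue {
        struct irq_poll iop;    /* set up elsewhere with irq_poll_init() */
        /* device-specific CQ state would live here */
};

/* hypothetical helper: reap up to 'budget' completions, return how many */
static int my_reap_completions(struct my_queue *q, int budget);

static irqreturn_t my_irq_handler(int irq, void *data)
{
        struct my_queue *q = data;
        int done = my_reap_completions(q, INLINE_BUDGET);

        if (done == INLINE_BUDGET) {
                /*
                 * The CQ probably holds more entries: mask the queue's
                 * interrupt (device-specific, not shown) and let irq-poll
                 * reap the rest from softirq context.
                 */
                irq_poll_sched(&q->iop);
        }

        return done ? IRQ_HANDLED : IRQ_NONE;
}

static int my_irqpoll_handler(struct irq_poll *iop, int budget)
{
        struct my_queue *q = container_of(iop, struct my_queue, iop);
        int done = my_reap_completions(q, budget);

        if (done < budget) {
                /* CQ drained: unmask the interrupt and stop polling */
                irq_poll_complete(iop);
        }

        return done;
}
--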
On Thu, Jan 19, 2017 at 10:23:28AM +0200, Sagi Grimberg wrote:
> Christoph suggested to me once that we can take a hybrid
> approach where we consume a small amount of completions (say 4)
> right away from the interrupt handler and if we have more
> we schedule irq-poll to reap the rest. But back the
On Thu, Jan 19, 2017 at 10:12:17AM +0200, Sagi Grimberg wrote:
>
> >>>I think you missed:
> >>>http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
> >>
> >>I indeed did, thanks.
> >>
> >But it doesn't help.
> >
> >We're still having to wait for the first interrupt, an
I think you missed:
http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
I indeed did, thanks.
But it doesn't help.
We're still having to wait for the first interrupt, and if we're really
fast that's the only completion we have to process.
Try this:
diff --gi
On 01/18/2017 04:16 PM, Johannes Thumshirn wrote:
> On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>>
>>> Hannes just spotted this:
>>> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>> const struct blk_mq_queue_data *bd)
>>> {
>>> [...]
>>>__n
On Wed, Jan 18, 2017 at 5:40 PM, Sagi Grimberg wrote:
>
>> Your report provided this stats with one-completion dominance for the
>> single-threaded case. Does it also hold if you run multiple fio
>> threads per core?
>
>
> It's useless to run more threads on that core, it's already fully
> utilize
On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>
> >Hannes just spotted this:
> >static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
> > const struct blk_mq_queue_data *bd)
> >{
> >[...]
> >__nvme_submit_cmd(nvmeq, &cmnd);
> >nvme_process_cq
Hannes just spotted this:
static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
                         const struct blk_mq_queue_data *bd)
{
        [...]
        __nvme_submit_cmd(nvmeq, &cmnd);
        nvme_process_cq(nvmeq);
        spin_unlock_irq(&nvmeq->q_lock);
        return BLK_MQ_RQ_QUEUE_OK;
On Wed, Jan 18, 2017 at 04:27:24PM +0200, Sagi Grimberg wrote:
>
> >So what you say is you saw a consumed == 1 [1] most of the time?
> >
> >[1] from
> >http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836
>
> Exactly. By processing 1 completion per interrupt it m
Your report provided this stats with one-completion dominance for the
single-threaded case. Does it also hold if you run multiple fio
threads per core?
It's useless to run more threads on that core, it's already fully
utilized. That single thread is already posting a fair amount of
submission
On Wed, Jan 18, 2017 at 5:27 PM, Sagi Grimberg wrote:
>
>> So what you say is you saw a consumed == 1 [1] most of the time?
>>
>> [1] from
>> http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836
>
>
> Exactly. By processing 1 completion per interrupt it makes perfe
So what you say is you saw a consumed == 1 [1] most of the time?
[1] from
http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836
Exactly. By processing 1 completion per interrupt it makes perfect sense
why this performs poorly, it's not worth paying the soft-ir
On 01/17/2017 05:50 PM, Sagi Grimberg wrote:
>
>> So it looks like we are super not efficient because most of the
>> times we catch 1
>> completion per interrupt and the whole point is that we need to find
>> more! This fio
>> is single threaded with QD=32 so I'd expect that we
On Tue, Jan 17, 2017 at 06:38:43PM +0200, Sagi Grimberg wrote:
>
> >Just for the record, all tests you've run are with the upper irq_poll_budget
> >of
> >256 [1]?
>
> Yes, but that's the point, I never ever reach this budget because
> I'm only processing 1-2 completions per interrupt.
>
> >We (
So it looks like we are super not efficient because most of the times
we catch 1 completion per interrupt and the whole point is that we need
to find more! This fio is single threaded with QD=32 so I'd expect that
we be somewhere in 8-31 almost all the time... I also
On Tue, Jan 17, 2017 at 06:15:43PM +0200, Sagi Grimberg wrote:
> Oh, and the current code that was tested can be found at:
>
> git://git.infradead.org/nvme.git nvme-irqpoll
Just for the record, all tests you've run are with the upper irq_poll_budget of
256 [1]?
We (Hannes and me) recently stumbe
Just for the record, all tests you've run are with the upper irq_poll_budget of
256 [1]?
Yes, but that's the point, I never ever reach this budget because
I'm only processing 1-2 completions per interrupt.
We (Hannes and I) recently stumbled across this when trying to poll for more
than 256
Oh, and the current code that was tested can be found at:
git://git.infradead.org/nvme.git nvme-irqpoll
Hey, so I made some initial analysis of what's going on with
irq-poll.
First, I sampled how much time it takes before we
get the interrupt in nvme_irq and the initial visit
to nvme_irqpoll_handler. I ran a single threaded fio
with QD=32 of 4K reads. This is two displays of a
histogram of the laten
--
[1]
queue = b'nvme0q1'
     usecs        : count
         0 -> 1   : 7310
         2 -> 3   : 11
         4 -> 7   : 10
         8 -> 15  : 20
16
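The single-threaded test described above should be reproducible with a small
fio job along these lines; the libaio engine, the randread pattern and the
device path are assumptions for illustration, not the exact job file behind
the histogram:
--
; rough approximation of the single-threaded QD=32 4K read workload
[qd32-4k-reads]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=1
runtime=60
time_based
filename=/dev/nvme0n1
--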
I'd like to attend LSF/MM mostly to participate in the storage topics
discussion, but also because containers is starting to need some
features from the VFS. Hopefully the container uid shifting is mostly
sorted out by the superblock user namespace, and I should have patches
to demonstrate this in
On 01/13/2017 05:07 PM, Mike Snitzer wrote:
> On Fri, Jan 13 2017 at 10:56am -0500,
> Hannes Reinecke wrote:
>
>> On 01/12/2017 06:29 PM, Benjamin Marzinski wrote:
[ .. ]
>>> While I've daydreamed of rewriting the multipath tools multiple times,
>>> and having nothing against you doing it in conce
On Fri, Jan 13 2017 at 10:56am -0500,
Hannes Reinecke wrote:
> On 01/12/2017 06:29 PM, Benjamin Marzinski wrote:
> > On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote:
> >> On 01/11/2017 11:23 PM, Mike Snitzer wrote:
> >>> On Wed, Jan 11 2017 at 4:44am -0500,
> >>> Hannes Reinecke
On 01/12/2017 06:29 PM, Benjamin Marzinski wrote:
> On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote:
>> On 01/11/2017 11:23 PM, Mike Snitzer wrote:
>>> On Wed, Jan 11 2017 at 4:44am -0500,
>>> Hannes Reinecke wrote:
>>>
Hi all,
I'd like to attend LSF/MM this year, a
On Wed, Jan 11, 2017 at 08:13:02AM -0700, Jens Axboe wrote:
> On 01/11/2017 08:07 AM, Jens Axboe wrote:
> > On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
> >> Hi all,
> >>
> >> I'd like to attend LSF/MM and would like to discuss polling for block
> >> drivers.
> >>
> >> Currently there is blk-
On Thu, 2017-01-12 at 10:41 +0200, Sagi Grimberg wrote:
> First, when the nvme device fires an interrupt, the driver consumes
> the completion(s) from the interrupt (usually there will be some more
> completions waiting in the cq by the time the host starts processing it).
> With irq-poll, we disabl
On Thu, Jan 12, 2017 at 04:41:00PM +0200, Sagi Grimberg wrote:
>
> >>**Note: when I ran multiple threads on more cpus the performance
> >>degradation phenomenon disappeared, but I tested on a VM with
> >>qemu emulation backed by null_blk so I figured I had some other
> >>bottleneck somewhere (that
On Thu, Jan 12, 2017 at 09:27:40AM +0100, Hannes Reinecke wrote:
> On 01/11/2017 11:23 PM, Mike Snitzer wrote:
> >On Wed, Jan 11 2017 at 4:44am -0500,
> >Hannes Reinecke wrote:
> >
> >>Hi all,
> >>
> >>I'd like to attend LSF/MM this year, and would like to discuss a
> >>redesign of the multipath
**Note: when I ran multiple threads on more cpus the performance
degradation phenomenon disappeared, but I tested on a VM with
qemu emulation backed by null_blk so I figured I had some other
bottleneck somewhere (that's why I asked for some more testing).
That could be because of the vmexits a
On Thu, Jan 12, 2017 at 01:44:05PM +0200, Sagi Grimberg wrote:
[...]
> Its pretty basic:
> --
> [global]
> group_reporting
> cpus_allowed=0
> cpus_allowed_policy=split
> rw=randrw
> bs=4k
> numjobs=4
> iodepth=32
> runtime=60
> time_based
> loops=1
> ioengine=libaio
> direct=1
> invalidate=1
> rand
I agree with Jens that we'll need some analysis if we want the
discussion to be effective, and I can spend some time on this if I
can find volunteers with high-end nvme devices (I only have access
to client nvme devices).
I have a P3700 but somehow burned the FW. Let me see if I can bring it back
On Thu, Jan 12, 2017 at 10:23:47AM +0200, Sagi Grimberg wrote:
>
> >>>Hi all,
> >>>
> >>>I'd like to attend LSF/MM and would like to discuss polling for block
> >>>drivers.
> >>>
> >>>Currently there is blk-iopoll but it is neither as widely used as NAPI in
> >>>the
> >>>networking field and acc
A typical Ethernet network adapter delays the generation of an interrupt
after it has received a packet. A typical block device or HBA does not delay
the generation of an interrupt that reports an I/O completion.
>>>
>>> NVMe allows for configurable interrupt coalescing, as
I'd like to attend LSF/MM and would like to discuss polling for block
drivers.
Currently there is blk-iopoll but it is neither as widely used as NAPI in
the networking field and according to Sagi's findings in [1] performance
with polling is not on par with IRQ usage.
On LSF/MM I'd like to whet
On 01/11/2017 11:23 PM, Mike Snitzer wrote:
On Wed, Jan 11 2017 at 4:44am -0500,
Hannes Reinecke wrote:
Hi all,
I'd like to attend LSF/MM this year, and would like to discuss a
redesign of the multipath handling.
With recent kernels we've got quite some functionality required for
multipathi
Hi all,
I'd like to attend LSF/MM and would like to discuss polling for block drivers.
Currently there is blk-iopoll but it is neither as widely used as NAPI in the
networking field and according to Sagi's findings in [1] performance with
polling is not on par with IRQ usage.
On LSF/MM I'd lik
>
> This is a separate topic. The initial proposal is for polling for
> interrupt mitigation, you are talking about polling in the context of
> polling for completion of an IO.
>
> We can definitely talk about this form of polling as well, but it should
> be a separate topic and probably proposed i
Hi
I'd like to discuss the ongoing work in the kernel to enable high priority
IO via polling for completion in the blk-mq subsystem.
Given that iopoll only really makes sense for low-latency, low queue depth
environments (i.e. down below 10-20us) I'd like to discuss which drivers
we think will ne
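For anyone who wants to experiment ahead of that discussion, completion
polling can already be driven from userspace with a latency-oriented fio job;
the pvsync2 engine with its hipri option and the device path below are
assumptions chosen for illustration, not a configuration taken from this
thread:
--
; low queue depth, latency-focused job that sets RWF_HIPRI on each I/O
[global]
ioengine=pvsync2
hipri
direct=1
rw=randread
bs=4k
iodepth=1
runtime=60
time_based

[poll-latency]
filename=/dev/nvme0n1
--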
>>
>> I'd like to attend LSF/MM and would like to discuss polling for block
>> drivers.
>>
>> Currently there is blk-iopoll but it is neither as widely used as NAPI
>> in the networking field and according to Sagi's findings in [1]
>> performance with polling is not on par with IRQ usage.
>>
>> On L
On 01/11/2017 09:36 PM, Stephen Bates wrote:
>>>
>>> I'd like to attend LSF/MM and would like to discuss polling for block
>>> drivers.
>>>
>>> Currently there is blk-iopoll but it is neither as widely used as NAPI
>>> in the networking field and according to Sagi's findings in [1]
>>> performance w
On Wed, Jan 11 2017 at 4:44am -0500,
Hannes Reinecke wrote:
> Hi all,
>
> I'd like to attend LSF/MM this year, and would like to discuss a
> redesign of the multipath handling.
>
> With recent kernels we've got quite some functionality required for
> multipathing already implemented, making so
On 01/11/2017 05:26 PM, Bart Van Assche wrote:
> On Wed, 2017-01-11 at 17:22 +0100, Hannes Reinecke wrote:
>> On 01/11/2017 05:12 PM, h...@infradead.org wrote:
>>> On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
A typical Ethernet network adapter delays the generation of an
>>
On Wed, 2017-01-11 at 17:22 +0100, Hannes Reinecke wrote:
> On 01/11/2017 05:12 PM, h...@infradead.org wrote:
> > On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
> > > A typical Ethernet network adapter delays the generation of an
> > > interrupt
> > > after it has received a packe
On 01/11/2017 05:12 PM, h...@infradead.org wrote:
> On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
>> A typical Ethernet network adapter delays the generation of an interrupt
>> after it has received a packet. A typical block device or HBA does not delay
>> the generation of an in
On 01/11/2017 09:12 AM, h...@infradead.org wrote:
> On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
>> A typical Ethernet network adapter delays the generation of an interrupt
>> after it has received a packet. A typical block device or HBA does not delay
>> the generation of an in
On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
[...]
> A typical Ethernet network adapter delays the generation of an interrupt
> after it has received a packet. A typical block device or HBA does not delay
> the generation of an interrupt that reports an I/O completion. I think
On Wed, 2017-01-11 at 14:43 +0100, Johannes Thumshirn wrote:
> I'd like to attend LSF/MM and would like to discuss polling for block
> drivers.
>
> Currently there is blk-iopoll but it is neither as widely used as NAPI in
> the networking field and according to Sagi's findings in [1] performance
>
On Wed, Jan 11, 2017 at 04:08:31PM +, Bart Van Assche wrote:
> A typical Ethernet network adapter delays the generation of an interrupt
> after it has received a packet. A typical block device or HBA does not delay
> the generation of an interrupt that reports an I/O completion.
NVMe allows fo
On 01/11/2017 04:07 PM, Jens Axboe wrote:
> On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
>> Hi all,
>>
>> I'd like to attend LSF/MM and would like to discuss polling for block
>> drivers.
>>
>> Currently there is blk-iopoll but it is neither as widely used as NAPI in the
>> networking field a
On 01/11/2017 08:07 AM, Jens Axboe wrote:
> On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
>> Hi all,
>>
>> I'd like to attend LSF/MM and would like to discuss polling for block
>> drivers.
>>
>> Currently there is blk-iopoll but it is neither as widely used as NAPI in the
>> networking field a
On 01/11/2017 06:43 AM, Johannes Thumshirn wrote:
> Hi all,
>
> I'd like to attend LSF/MM and would like to discuss polling for block drivers.
>
> Currently there is blk-iopoll but it is neither as widely used as NAPI in the
> networking field and according to Sagi's findings in [1] performance wi
On 01/11/2017 02:43 PM, Johannes Thumshirn wrote:
> Hi all,
>
> I'd like to attend LSF/MM and would like to discuss polling for block drivers.
>
> Currently there is blk-iopoll but it is neither as widely used as NAPI in the
> networking field and according to Sagi's findings in [1] performance wi
Hi all,
I'd like to attend LSF/MM and would like to discuss polling for block drivers.
Currently there is blk-iopoll but it is neither as widely used as NAPI in the
networking field and according to Sagi's findings in [1] performance with
polling is not on par with IRQ usage.
On LSF/MM I'd like t
Hi all,
I'd like to attend LSF/MM this year, and would like to discuss a
redesign of the multipath handling.
With recent kernels we've got quite some functionality required for
multipathing already implemented, making some design decisions of the
original multipath-tools implementation quite poin
I'd like to attend LSF/MM this year, and I'd like to discuss
XCOPY and what we need to do to get this functionality
incorporated. Martin/Mikulas' patches have been around for
a while, I think the last holdup was some request flag changes
that have since been resolved.
We do need to have a token-b
Any chance the two of you could occasionally trim the mails you're quoting?
> , "lsf...@lists.linuxfoundation.org"
>
> Subject: Re: [Lsf-pc] [LSF/MM ATTEND] Online Logical Head Depop and SMR
> disks chunked writepages
>
>
> >On Mon 29-02-16 02:02:16, Damien Le Moal wrote:
> >>
> >> >On Wed 24-02-16 01:
From: Jan Kara
Date: Monday, February 29, 2016 at 22:40
To: Damien Le Moal
Cc: Jan Kara, "linux-bl...@vger.kernel.org", Bart Van Assche,
Matias Bjorling, "linux-scsi@vger.kernel.org",
"lsf...@lists.linuxfoundation.org"
Subject: Re: [Lsf-pc]
On 02/27/2016 01:54 PM, Nicholas A. Bellinger wrote:
> On Sat, 2016-02-27 at 08:19 -0800, Lee Duncan wrote:
>> [Apologies for the resend.]
>>
>> I would like to attend LSF/MM this year. I would like to discuss
>> problems I've had dealing with LIO targets and their inherent
>> asynchronicity in sys
On Mon 29-02-16 02:02:16, Damien Le Moal wrote:
>
> >On Wed 24-02-16 01:53:24, Damien Le Moal wrote:
> >>
> >> >On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
> >> >>
> >> >> >On 02/22/16 18:56, Damien Le Moal wrote:
> >> >> >> 2) Write back of dirty pages to SMR block devices:
> >> >> >>
> >>
>On 02/29/2016 10:02 AM, Damien Le Moal wrote:
>>
>>> On Wed 24-02-16 01:53:24, Damien Le Moal wrote:
> On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
>>
>>> On 02/22/16 18:56, Damien Le Moal wrote:
2) Write back of dirty pages to SMR block devices:
Di
On 02/29/2016 10:02 AM, Damien Le Moal wrote:
>
>> On Wed 24-02-16 01:53:24, Damien Le Moal wrote:
>>>
On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
>
>> On 02/22/16 18:56, Damien Le Moal wrote:
>>> 2) Write back of dirty pages to SMR block devices:
>>>
>>> Dirty pages o
>On Wed 24-02-16 01:53:24, Damien Le Moal wrote:
>>
>> >On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
>> >>
>> >> >On 02/22/16 18:56, Damien Le Moal wrote:
>> >> >> 2) Write back of dirty pages to SMR block devices:
>> >> >>
>> >> >> Dirty pages of a block device inode are currently processed
On 02/29/2016 12:13 AM, Christoph Hellwig wrote:
> I was still hoping we'd get your slicing patches in ASAP at least.
> But there are a couple more topics here, so I think it would
> still be useful in that case.
>
Actually I have been pondering an alternative approach.
In most cases submit_bio() /
I was still hoping we'd get your slicing patches in ASAP at least.
But there are a couple more topics here, so I think it would
still be useful in that case.
On Sat, 2016-02-27 at 08:19 -0800, Lee Duncan wrote:
> [Apologies for the resend.]
>
> I would like to attend LSF/MM this year. I would like to discuss
> problems I've had dealing with LIO targets and their inherent
> asynchronicity in sysfs. I believe that with judicious use of kernel
> events we
[Apologies for the resend.]
I would like to attend LSF/MM this year. I would like to discuss
problems I've had dealing with LIO targets and their inherent
asynchronicity in sysfs. I believe that with judicious use of kernel
events we can help user space handle target changes more cleanly.
As for
Hi,
I'd like to participate in LSF/MM and would like to present/discuss
ideas for introducing I/O scheduling support to blk-mq.
Motivation for this is to be able to use scsi-mq even on systems that
have slow (spinning) devices attached to the SCSI stack.
I think the presentation/discussion should
On Wed 24-02-16 01:53:24, Damien Le Moal wrote:
>
> >On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
> >>
> >> >On 02/22/16 18:56, Damien Le Moal wrote:
> >> >> 2) Write back of dirty pages to SMR block devices:
> >> >>
> >> >> Dirty pages of a block device inode are currently processed using the
>On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
>>
>> >On 02/22/16 18:56, Damien Le Moal wrote:
>> >> 2) Write back of dirty pages to SMR block devices:
>> >>
>> >> Dirty pages of a block device inode are currently processed using the
>> >> generic_writepages function, which can be executed simu
On Tue 23-02-16 05:31:13, Damien Le Moal wrote:
>
> >On 02/22/16 18:56, Damien Le Moal wrote:
> >> 2) Write back of dirty pages to SMR block devices:
> >>
> >> Dirty pages of a block device inode are currently processed using the
> >> generic_writepages function, which can be executed simultaneous
>On 02/22/16 18:56, Damien Le Moal wrote:
>> 2) Write back of dirty pages to SMR block devices:
>>
>> Dirty pages of a block device inode are currently processed using the
>> generic_writepages function, which can be executed simultaneously
>> by multiple contexts (e.g sync, fsync, msync, sync_fil
On 02/22/16 18:56, Damien Le Moal wrote:
2) Write back of dirty pages to SMR block devices:
Dirty pages of a block device inode are currently processed using the
generic_writepages function, which can be executed simultaneously
by multiple contexts (e.g sync, fsync, msync, sync_file_range, etc).
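As a purely illustrative aside: one crude way to keep those concurrent
writeback contexts from racing on such a device is to funnel them all through
a single lock, roughly as sketched below. The zoned_writepages() name and the
global mutex are assumptions made for this sketch, not anything proposed in
the thread.
--
#include <linux/fs.h>
#include <linux/writeback.h>
#include <linux/mutex.h>

/* One writeback context at a time for the zoned block device inode. */
static DEFINE_MUTEX(zoned_wb_lock);

static int zoned_writepages(struct address_space *mapping,
                            struct writeback_control *wbc)
{
        int ret;

        /* Serialize sync/fsync/msync/sync_file_range initiated writeback. */
        mutex_lock(&zoned_wb_lock);
        ret = generic_writepages(mapping, wbc);
        mutex_unlock(&zoned_wb_lock);

        return ret;
}
--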
Hello,
I would like to attend LSF/MM 2016 to discuss the following topics.
1) Online Logical Head Depop
Some disk drives available on the market already provide a "logical
depop" function which allows a system to decommission a defective
disk head, reformat the disk and continue using this same
Greetings:
I would like to attend LSF/MM to continue discussion related to SMR.
Besides the SMRFFS, we've been developing a zoned device mapper (ZDM).
I would like to gather feedback and direction. I can provide a quick
explanation of the architecture. This will enable existing software
to work
Hi,
I'd like to attend LSF/MM 2016.
I've been working on scsi rdma transports and the target stack for some
time now. Lately I've been looking at nvme as well and I think I can
contribute to the dm-multipath discussions in the context of nvme and
blk-mq performance. If we plan to talk about nvme
I'd like to attend LSF. I'm currently helping to maintain the qla2xxx and
bnx2fc drivers. I have been developing for low level Linux SCSI drivers
for the last 5 years and have been working with the Linux storage stack
for over 10.
Specifically, I'd like to be included in the discussions about
On 01/09/2015 05:14 PM, Ewan Milne wrote:
> I'd like to attend LSF -- I am responsible for maintaining the SCSI
> subsystem at Red Hat, and in addition to resolving issues for customers
> and partners, I've been participating in upstream development for the
> past couple of years. I have an extens
I'd like to attend LSF -- I am responsible for maintaining the SCSI
subsystem at Red Hat, and in addition to resolving issues for customers
and partners, I've been participating in upstream development for the
past couple of years. I have an extensive background in SCSI and OS
development, includi
On 01/08/2015 03:11 PM, Douglas Gilbert wrote:
Andy,
XCOPY is huge and keeps getting bigger (apart from the recent
trimming of the "LID1" generation mentioned above). Most SCSI
commands are sent to the LU involved, but for EXTENDED COPY
command itself, should it be sent to the source or destinati
On 15-01-08 01:58 PM, Andy Grover wrote:
On 01/07/2015 02:11 PM, Douglas Gilbert wrote:
T10 have now dropped the LID1 and LID4 stuff (it's the length
of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting
all LID1 variants in SPC-5 revision 01. So the LID1 variants
are now gone and the LI
On Thu, 2015-01-08 at 10:58 -0800, Andy Grover wrote:
> On 01/07/2015 02:11 PM, Douglas Gilbert wrote:
> > T10 have now dropped the LID1 and LID4 stuff (it's the length
> > of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting
> > all LID1 variants in SPC-5 revision 01. So the LID1 variants
On 01/07/2015 02:11 PM, Douglas Gilbert wrote:
T10 have now dropped the LID1 and LID4 stuff (it's the length
of the LIST IDENTIFIER field: 1 byte or 4 bytes) by obsoleting
all LID1 variants in SPC-5 revision 01. So the LID1 variants
are now gone and the LID4 appendage is dropped from the remaining
> "Hannes" == Hannes Reinecke writes:
Hannes> But the array might prioritize 'normal' I/O requests, and treat
Hannes> XCOPY with a lower priority. So given enough load XCOPY might
Hannes> actually be slower than a normal copy ...
Yes, but may have lower impact on concurrent I/O.
--
Martin
On 01/08/2015 12:39 AM, Martin K. Petersen wrote:
>> "Hannes" == Hannes Reinecke writes:
>
> Hannes> Not quite. XCOPY is optional, and the system works well without
> Hannes> it. So the exception handling would need to copy things by hand
> Hannes> to avoid regressions.
>
> Or defer to user
> "Hannes" == Hannes Reinecke writes:
Hannes> Not quite. XCOPY is optional, and the system works well without
Hannes> it. So the exception handling would need to copy things by hand
Hannes> to avoid regressions.
Or defer to user space.
But it's really no different from how we do WRITE SAME
On 15-01-07 04:21 PM, Nicholas A. Bellinger wrote:
Hi LSF-PC folks,
I'd like to attend LSF/MM 2015 this year in Boston.
The topics I'm interested in are iSCSI/iSER multi-queue (MC/S) support
within the open-iscsi initiator, and pushing forward the pieces
necessary for EXTENDED_COPY host side su
Hi LSF-PC folks,
I'd like to attend LSF/MM 2015 this year in Boston.
The topics I'm interested in are iSCSI/iSER multi-queue (MC/S) support
within the open-iscsi initiator, and pushing forward the pieces
necessary for EXTENDED_COPY host side support so we can finally begin to
utilize copy-offload
On Sun, 2015-01-04 at 12:16 -0500, Mike Snitzer wrote:
> Hi,
>
> I'd like to attend LSF (and the first day of Vault).
>
> As a DM maintainer I'm open to discussing anything DM related. In
> particular I'd like to at least have hallway track discussions with
> key people interested in DM multipat
On 01/07/2015 12:55 AM, Martin K. Petersen wrote:
>> "Hannes" == Hannes Reinecke writes:
>
> Hannes> Yep. That definitely needs to be discussed. Especially we'd
> Hannes> need to discuss how to handle exceptions, seeing that XCOPY
> Hannes> might fail basically at any time.
>
> Like any S
> "Hannes" == Hannes Reinecke writes:
Hannes> Yep. That definitely needs to be discussed. Especially we'd
Hannes> need to discuss how to handle exceptions, seeing that XCOPY
Hannes> might fail basically at any time.
Like any SCSI command :)
--
Martin K. Petersen Oracle Linux Engine
> "Mike" == Mike Snitzer writes:
Mike> Another topic we need to come to terms with is the state of XCOPY
Mike> (whether the initial approach needs further work, etc) -- Mikulas
Mike> Patocka reworked aspects of Martin's initial approach but it
Mike> hasn't progressed upstream:
Yeah, I'd like
On 01/04/2015 06:16 PM, Mike Snitzer wrote:
> Hi,
>
> I'd like to attend LSF (and the first day of Vault).
>
> As a DM maintainer I'm open to discussing anything DM related. In
> particular I'd like to at least have hallway track discussions with
> key people interested in DM multipath support f
Hi,
I'd like to attend LSF (and the first day of Vault).
As a DM maintainer I'm open to discussing anything DM related. In
particular I'd like to at least have hallway track discussions with
key people interested in DM multipath support for blk-mq devices.
Keith Busch and I have gone ahead and i