On Tue, Jun 05, 2018 at 03:57:05PM -0700, Roland Dreier wrote:
> That makes sense but I'm not sure it covers everything. Probably the
> most common way to do NVMe/RDMA will be with a single HCA that has
> multiple ports, so there's no sensible CPU locality. On the other
> hand we want to keep both [...]
On Wed, Jun 06, 2018 at 12:32:21PM +0300, Sagi Grimberg wrote:
> Huh? different paths == different controllers so this sentence can't
> be right... you mean that a path selector will select a controller
> based on the home node of the local rdma device connecting to it and
> the running cpu right?
We plan to implement all the fancy NVMe standards like ANA, but it
seems that there is still a requirement to let the host side choose
policies about how to use paths (round-robin vs least queue depth for
example). Even in the modern SCSI world with VPD pages and ALUA,
> there are still knobs [...]
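
To make the policy question concrete, a least-queue-depth choice of the kind Roland mentions can be modelled in a few lines of userspace C. This is only an illustrative sketch of the selection logic; the struct and function names are invented for the example and are not the in-tree dm-multipath or NVMe code.

#include <stddef.h>

struct path {
    int id;
    unsigned int inflight;   /* requests currently outstanding on this path */
    int usable;              /* path is live and not failed */
};

/* Pick the usable path with the fewest outstanding requests. */
static struct path *select_least_queue_depth(struct path *paths, size_t n)
{
    struct path *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!paths[i].usable)
            continue;
        if (!best || paths[i].inflight < best->inflight)
            best = &paths[i];
    }
    return best;   /* NULL means no usable path */
}

A round-robin selector would differ only in this one function, which is exactly why the thread keeps returning to where that policy choice should live.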
> The sensible thing to do in nvme is to use different paths for
> different queues. That is e.g. in the RDMA case use the HCA closer
> to a given CPU by default. We might allow to override this for
> cases where there is a good reason, but what I really don't want is
> configurability for configurability's sake. [...]
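
A sketch of the default Christoph describes, again as illustrative userspace C with invented names: each queue prefers the path whose HCA sits on the same NUMA node as the submitting CPU, falling back to any usable path.

#include <stddef.h>

struct rdma_path {
    int numa_node;   /* home node of the HCA backing this path */
    int usable;
};

/* Prefer a path local to the submitting CPU's NUMA node. */
static struct rdma_path *select_by_locality(struct rdma_path *paths, size_t n,
                                            int cpu_node)
{
    struct rdma_path *fallback = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!paths[i].usable)
            continue;
        if (paths[i].numa_node == cpu_node)
            return &paths[i];
        if (!fallback)
            fallback = &paths[i];
    }
    return fallback;
}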
On Mon, Jun 04, 2018 at 02:58:49PM -0700, Roland Dreier wrote:
> We plan to implement all the fancy NVMe standards like ANA, but it
> seems that there is still a requirement to let the host side choose
> policies about how to use paths (round-robin vs least queue depth for
> example). Even in the modern SCSI world with VPD pages and ALUA,
> there are still knobs [...]
> Moreover, I also wanted to point out that fabrics array vendors are
> building products that rely on standard nvme multipathing (and probably
> multipathing over dispersed namespaces as well), and keeping a knob that
> will keep nvme users with dm-multipath will probably not help them
> educate their [...]
On Mon, Jun 04, 2018 at 02:46:47PM +0300, Sagi Grimberg wrote:
> I agree with Christoph that changing personality on the fly is going to
> be painful. This opt-in will need to be one-host at connect time. For
> that, we will probably need to also expose an argument in nvme-cli too.
> Changing the [...]
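
To make the connect-time opt-in concrete: something like the following, where a hypothetical mpath_personality= option is parsed once while the controller is being established and never changed afterwards. The option name and helper are assumptions for illustration, not actual nvme-cli or fabrics code.

#include <string.h>

enum mpath_personality { MPATH_NATIVE, MPATH_DM };

/* Parse a hypothetical "mpath_personality=native|dm" connect option.
 * Returns 0 on success, -1 on an unrecognized value. */
static int parse_mpath_personality(const char *val, enum mpath_personality *out)
{
    if (strcmp(val, "native") == 0)
        *out = MPATH_NATIVE;
    else if (strcmp(val, "dm") == 0)
        *out = MPATH_DM;
    else
        return -1;
    return 0;
}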
[so much for putting out flames... :/]
This projecting onto me that I've not been keeping the conversation
technical is in itself hostile. Sure I get frustrated and lash out (as
I'm _sure_ you'll feel in this reply) [...]
You're right, I do feel this is lashing out. And I don't appreciate it.
Please [...]
On Sun, Jun 03 2018 at 7:00pm -0400,
Sagi Grimberg wrote:
>
> >I'm aware that most everything in multipath.conf is SCSI/FC specific.
> >That isn't the point. dm-multipath and multipathd are an existing
> >framework for managing multipath storage.
> >
> >It could be made to work with NVMe. But yes it would not be easy.
> >Especially not with the native NVMe multipath [...]
I'm aware that most everything in multipath.conf is SCSI/FC specific.
That isn't the point. dm-multipath and multipathd are an existing
framework for managing multipath storage.
It could be made to work with NVMe. But yes it would not be easy.
Especially not with the native NVMe multipath [...]
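
The "existing framework" point is that dm-multipath keeps policy behind a small pluggable path-selector interface, so an NVMe-aware selector could slot in rather than require a rewrite. A toy model of that idea in self-contained C; the names and simplified signatures are invented for illustration and do not match the real drivers/md/dm-path-selector.h.

#include <stddef.h>

struct ps_path;   /* opaque per-path state owned by the selector */

/* Toy model of a pluggable path selector: policy lives behind
 * callbacks, transports only supply paths. */
struct path_selector_type {
    const char *name;
    struct ps_path *(*select_path)(struct ps_path **paths, size_t n);
};

static const struct path_selector_type *registry[8];
static size_t nr_registered;

/* Register a selector so multipath tables can refer to it by name. */
static int register_path_selector(const struct path_selector_type *pst)
{
    if (nr_registered == sizeof(registry) / sizeof(registry[0]))
        return -1;
    registry[nr_registered++] = pst;
    return 0;
}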
On Fri, Jun 01 2018 at 10:09am -0400,
Martin K. Petersen wrote:
>
> Good morning Mike,
>
> > This notion that only native NVMe multipath can be successful is utter
> > bullshit. And the mere fact that I've gotten such a reaction from a
> > select few speaks to some serious control issues.
>
>
Good morning Mike,
> This notion that only native NVMe multipath can be successful is utter
> bullshit. And the mere fact that I've gotten such a reaction from a
> select few speaks to some serious control issues.
Please stop making this personal.
> Imagine if XFS developers just one day [...]
On Thu, May 31 2018 at 10:40pm -0400,
Martin K. Petersen wrote:
>
> Mike,
>
> > 1) container A is tasked with managing some dedicated NVMe technology
> > that absolutely needs native NVMe multipath.
>
> > 2) container B is tasked with offering some canned layered product
> > that was developed on top of dm-multipath with its own multipath-tools
> > oriented APIs, etc. And it [...]
On Thu, May 31 2018 at 12:34pm -0400,
Christoph Hellwig wrote:
> On Thu, May 31, 2018 at 08:37:39AM -0400, Mike Snitzer wrote:
> > I saw your reply to the 1/3 patch.. I do agree it is broken for not
> > checking if any handles are active. But that is easily fixed no?
>
> Doing a switch at runtime simply is a really bad idea. If for some
> reason we end up with a good [...]
Mike,
> 1) container A is tasked with managing some dedicated NVMe technology
> that absolutely needs native NVMe multipath.
> 2) container B is tasked with offering some canned layered product
> that was developed on top of dm-multipath with its own multipath-tools
> oriented APIs, etc. And it [...]
On Thu, May 31 2018 at 12:33pm -0400,
Christoph Hellwig wrote:
> On Wed, May 30, 2018 at 06:02:06PM -0400, Mike Snitzer wrote:
> > Because once nvme_core.multipath=N is set: native NVMe multipath is then
> > not accessible from the same host. The goal of this patchset is to give
> users choice. But not limit them to _only_ using dm-multipath if they
> just have so [...]
On Thu, May 31, 2018 at 11:37:20AM +0300, Sagi Grimberg wrote:
>> the same host with PCI NVMe could be connected to a FC network that has
>> historically always been managed via dm-multipath.. but say that
>> FC-based infrastructure gets updated to use NVMe (to leverage a wider
>> NVMe investment, [...]
On Thu, May 31, 2018 at 08:37:39AM -0400, Mike Snitzer wrote:
> I saw your reply to the 1/3 patch.. I do agree it is broken for not
> checking if any handles are active. But that is easily fixed no?
Doing a switch at runtime simply is a really bad idea. If for some
reason we end up with a good [...]
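
The failure mode being debated is easy to state in code: a runtime switch is only safe if nothing currently holds a reference to the existing personality. A self-contained sketch of that guard, with invented names and a plain counter standing in for the kernel's refcounting; this is what the missing check in patch 1/3 would amount to.

#include <errno.h>

struct subsys {
    int personality;        /* current multipath personality */
    unsigned int handles;   /* open references: block devices, etc. */
};

/* Refuse to flip the personality while anything is using it. */
static int set_personality(struct subsys *s, int new_personality)
{
    if (s->handles > 0)
        return -EBUSY;      /* active users pin the current setting */
    s->personality = new_personality;
    return 0;
}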
On Wed, May 30, 2018 at 06:02:06PM -0400, Mike Snitzer wrote:
> Because once nvme_core.multipath=N is set: native NVMe multipath is then
> not accessible from the same host. The goal of this patchset is to give
> users choice. But not limit them to _only_ using dm-multipath if they
> just have so [...]
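
For context: nvme_core.multipath is a single boolean module parameter, and to the best of my knowledge the in-tree definition registers it read-only (0444), so it can only be set at module load. That is exactly why it cannot express per-subsystem choice, which the following self-contained model makes explicit (names invented for illustration).

#include <stdbool.h>

/* Model of the status quo: one global, load-time switch
 * (nvme_core.multipath=N) shared by every subsystem. */
static bool global_multipath = true;

struct subsystem {
    const char *nqn;
    bool use_native_mpath;   /* the per-subsystem wish the series adds */
};

/* With only the global flag, this is the entire policy decision. */
static bool subsystem_uses_native(const struct subsystem *s)
{
    (void)s;                 /* per-subsystem preference is ignored today */
    return global_multipath;
}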
On Thu, May 31 2018 at 4:51am -0400,
Sagi Grimberg wrote:
>
> >>Moreover, I also wanted to point out that fabrics array vendors are
> >>building products that rely on standard nvme multipathing (and probably
> >>multipathing over dispersed namespaces as well), and keeping a knob that
> >>will keep nvme users with dm-multipath will probably not help them
> >>educate their [...]
On Thu, May 31 2018 at 4:37am -0400,
Sagi Grimberg wrote:
>
> >Wouldn't expect you guys to nurture this 'mpath_personality' knob. So
> >when features like "dispersed namespaces" land a negative check would
> >need to be added in the code to prevent switching from "native".
> >
> >And once something like "dispersed namespaces" lands we'd then have to
> >see about a more [...]
Moreover, I also wanted to point out that fabrics array vendors are
building products that rely on standard nvme multipathing (and probably
multipathing over dispersed namespaces as well), and keeping a knob that
will keep nvme users with dm-multipath will probably not help them
educate their [...]
Wouldn't expect you guys to nurture this 'mpath_personality' knob. So
when features like "dispersed namespaces" land a negative check would
need to be added in the code to prevent switching from "native".
And once something like "dispersed namespaces" lands we'd then have to
see about a more [...]
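
The negative check Mike describes would be a few lines at the switch point: if a subsystem ever reports a feature that only native multipath understands, refuse to leave native mode. A sketch in self-contained C; the feature flag is hypothetical, since dispersed namespaces had not landed at the time.

#include <errno.h>

struct subsys_caps {
    int has_dispersed_namespaces;   /* hypothetical future feature flag */
};

/* Features only native multipath can handle pin the personality. */
static int may_switch_from_native(const struct subsys_caps *caps)
{
    if (caps->has_dispersed_namespaces)
        return -EOPNOTSUPP;
    return 0;
}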
On Tue, May 29, 2018 at 09:22:40AM +0200, Johannes Thumshirn wrote:
> On Mon, May 28, 2018 at 11:02:36PM -0400, Mike Snitzer wrote:
> > No, what both Red Hat and SUSE are saying is: cool let's have a go at
> > "Plan A" but, in parallel, what harm is there in allowing "Plan B" (dm
> multipath) to be conditionally enabled to coexist with native NVMe
> multipath?
On Wed, May 30 2018 at 5:20pm -0400,
Sagi Grimberg wrote:
> Moreover, I also wanted to point out that fabrics array vendors are
> building products that rely on standard nvme multipathing (and probably
> multipathing over dispersed namespaces as well), and keeping a knob that
> will keep nvme users with dm-multipath will probably not help them
> educate their [...]
On Wed, May 30 2018 at 5:20pm -0400,
Sagi Grimberg wrote:
> Hi Folks,
>
> I'm sorry to chime in super late on this, but a lot has been
> going on for me lately which got me off the grid.
>
> So I'll try to provide my input hopefully without starting any more
> flames..
>
> >>>This patch series aims to provide a more fine grained control over
> >>>nvme's native multipathing, by allowing it to be switched on and off
> >>>on a per-subsystem basis instead of a big global switch.
Hi Folks,
I'm sorry to chime in super late on this, but a lot has been
going on for me lately which got me off the grid.
So I'll try to provide my input hopefully without starting any more
flames..
This patch series aims to provide a more fine grained control over
nvme's native multipathing, by allowing it to be switched on and off
on a per-subsystem basis instead of a big global switch.
On Tue, May 29 2018 at 4:09am -0400,
Christoph Hellwig wrote:
> On Tue, May 29, 2018 at 09:22:40AM +0200, Johannes Thumshirn wrote:
> > For a "Plan B" we can still use the global knob that's already in
> > place (even if this reminds me so much about scsi-mq which at least we
> > haven't turned on in fear of performance regressions).
On Tue, May 29, 2018 at 09:22:40AM +0200, Johannes Thumshirn wrote:
> For a "Plan B" we can still use the global knob that's already in
> place (even if this reminds me so much about scsi-mq which at least we
> haven't turned on in fear of performance regressions).
>
> Let's drop the discussion here [...]
On Mon, May 28, 2018 at 11:02:36PM -0400, Mike Snitzer wrote:
> No, what both Red Hat and SUSE are saying is: cool let's have a go at
> "Plan A" but, in parallel, what harm is there in allowing "Plan B" (dm
> multipath) to be conditionally enabled to coexist with native NVMe
> multipath?
For a "Pl
On Mon, 28 May 2018 23:02:36 -0400
Mike Snitzer wrote:
> On Mon, May 28 2018 at 9:19pm -0400,
> Martin K. Petersen wrote:
>
> >
> > Mike,
> >
> > I understand and appreciate your position but I still don't think
> > the arguments for enabling DM multipath are sufficiently
> > compelling. The whole point of ANA is for things to be plug and play
> > without any admin intervention whatsoever.
On Mon, May 28 2018 at 9:19pm -0400,
Martin K. Petersen wrote:
>
> Mike,
>
> I understand and appreciate your position but I still don't think the
> arguments for enabling DM multipath are sufficiently compelling. The
> whole point of ANA is for things to be plug and play without any admin
> intervention whatsoever.
Mike,
I understand and appreciate your position but I still don't think the
arguments for enabling DM multipath are sufficiently compelling. The
whole point of ANA is for things to be plug and play without any admin
intervention whatsoever.
I also think we're getting ahead of ourselves a bit. [...]
On Fri, May 25 2018 at 10:12am -0400,
Christoph Hellwig wrote:
> On Fri, May 25, 2018 at 09:58:13AM -0400, Mike Snitzer wrote:
> > We all basically knew this would be your position. But at this year's
> > LSF we pretty quickly reached consensus that we do in fact need this.
> > Except for yourse
On Fri, May 25, 2018 at 04:22:17PM +0200, Johannes Thumshirn wrote:
> But Mike's and Hannes' arguments were reasonable as well, we do not
> know if there are any existing setups we might break leading to
> support calls, which we have to deal with. Personally I don't believe
> there are lots of [...]
On Fri, May 25, 2018 at 03:05:35PM +0200, Christoph Hellwig wrote:
> On Fri, May 25, 2018 at 02:53:19PM +0200, Johannes Thumshirn wrote:
> > Hi,
> >
> > This patch series aims to provide a more fine grained control over
> > nvme's native multipathing, by allowing it to be switched on and off
> > on a per-subsystem basis instead of a big global switch.
On Fri, May 25, 2018 at 09:58:13AM -0400, Mike Snitzer wrote:
> We all basically knew this would be your position. But at this year's
> LSF we pretty quickly reached consensus that we do in fact need this.
> Except for yourself, Sagi and afaik Martin George: all on the cc were in
> attendance and [...]
On Fri, May 25 2018 at 9:05am -0400,
Christoph Hellwig wrote:
> On Fri, May 25, 2018 at 02:53:19PM +0200, Johannes Thumshirn wrote:
> > Hi,
> >
> > This patch series aims to provide a more fine grained control over
> > nvme's native multipathing, by allowing it to be switched on and off
> > on a per-subsystem basis instead of a big global switch.
On Fri, May 25, 2018 at 02:53:19PM +0200, Johannes Thumshirn wrote:
> Hi,
>
> This patch series aims to provide a more fine grained control over
> nvme's native multipathing, by allowing it to be switched on and off
> on a per-subsystem basis instead of a big global switch.
No. The only reason [...]
Hi,
This patch series aims to provide a more fine grained control over
nvme's native multipathing, by allowing it to be switched on and off
on a per-subsystem basis instead of a big global switch.
The prime use-case is for mixed scenarios where a user might want to use
nvme's native multipathing on [...]
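
Mechanically, the series boils down to replacing reads of the global flag with a per-subsystem field and exposing that field through sysfs. A userspace-flavoured model of the store path follows; the names are invented and the real patches differ in detail, but the parse-then-reject-while-busy shape is the essence of what is being proposed.

#include <errno.h>
#include <string.h>

struct nvme_subsys_model {
    int native_mpath;        /* per-subsystem instead of one global bool */
    unsigned int busy_refs;  /* users that pin the current setting */
};

/* Model of a sysfs store handler: parse "0"/"1", reject while busy. */
static int mpath_personality_store(struct nvme_subsys_model *s, const char *buf)
{
    int want;

    if (strcmp(buf, "0") == 0)
        want = 0;
    else if (strcmp(buf, "1") == 0)
        want = 1;
    else
        return -EINVAL;

    if (s->busy_refs > 0)
        return -EBUSY;   /* can't flip personalities under active users */

    s->native_mpath = want;
    return 0;
}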