> > Hey Sagi,
> >
> > The patch works allowing connections for the various affinity mappings below:
> >
> > One comp_vector per core across all cores, starting with numa-local cores:
>
> Thanks Steve, is this your "Tested-by:" tag?

Sure:

Tested-by: Steve Wise
Hi Jason,

The new Patchwork doesn't grab patches inlined in messages, so you
will need to resend it.

Yes, just wanted to add Steve's Tested-by, as it's going to
lists that did not follow this thread.
Also, can someone remind me what the outcome is here? Does it
supersede Leon's patch:
htt
> On 8/16/2018 1:26 PM, Sagi Grimberg wrote:
> >
> >> Let me know if you want me to try this or any particular fix.
> >
> > Steve, can you test this one?
>
> Yes! I'll try it out tomorrow.
>
> Stevo
>
Hey Sagi,

The patch works allowing connections for the various affinity mappings below:

One comp_vector per core across all cores, starting with numa-local cores:
> Let me know if you want me to try this or any particular fix.

Steve, can you test this one?

--
[PATCH rfc] block: fix rdma queue mapping

nvme-rdma attempts to map queues based on irq vector affinity.
However, for some devices, completion vector irq affinity is
configurable by the user which
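The gist of the RFC, as discussed later in the thread, is a best-effort mapping with a naive fallback. Below is a minimal sketch of that idea, assuming the 4.18-era signatures of blk_mq_rdma_map_queues(), ib_get_vector_affinity() and blk_mq_map_queues(); it is an illustration of the approach, not the RFC body as posted.

#include <linux/kernel.h>
#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>

/*
 * Illustrative sketch, not the RFC as posted: assign each hw queue the
 * CPUs of its completion vector's affinity mask, then validate the
 * result.  If a user-tuned mask leaves a CPU without a queue or a
 * queue without a CPU (the "queue 9 is not mapped" overlap seen in
 * this thread), fall back to the naive blk_mq_map_queues() spread
 * instead of letting the connect fail with ret=-18.
 */
int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;
	bool mapped;

	for_each_possible_cpu(cpu)
		set->mq_map[cpu] = UINT_MAX;	/* not mapped yet */

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;
		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mapped = false;
		for_each_possible_cpu(cpu) {
			if (set->mq_map[cpu] == UINT_MAX)
				goto fallback;	/* CPU with no queue */
			if (set->mq_map[cpu] == queue)
				mapped = true;
		}
		if (!mapped)
			goto fallback;		/* queue with no CPU */
	}
	return 0;

fallback:
	return blk_mq_map_queues(set);
}

The validation pass is the part the then-current code lacked; it trusted the affinity masks and left nvme-rdma to discover the hole at connect time.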
On 8/1/2018 8:12 AM, Sagi Grimberg wrote:
> Hi Max,

Hi,

>> Yes, since nvmf is the only user of this function.
>> Still waiting for comments on the suggested patch :)
>
> Sorry for the late response (but I'm on vacation so I have
> an excuse ;))

NP :) currently the code works..

> I'm thinking that we should avoid trying to find an assignment
> when stuff like irqbalance daemon is running
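The irqbalance concern can be illustrated at the ULP call site. Here is a sketch of nvme_rdma_map_queues() as it might sit in drivers/nvme/host/rdma.c; nvme_rdma_ctrl, ib_get_vector_affinity() and blk_mq_rdma_map_queues() are the real 4.18-era names, but the guard below (treating a missing affinity mask as "no stable assignment, use the naive spread") is an assumption for illustration, not the merged behavior.

/*
 * Sketch only: trust the device's completion vector affinity for the
 * queue map only when the driver reports one.  The assumption here is
 * that a driver whose IRQ masks are user/irqbalance-writable would
 * return NULL rather than a cached mask that may already be stale.
 */
static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
{
	struct nvme_rdma_ctrl *ctrl = set->driver_data;

	if (ib_get_vector_affinity(ctrl->device->dev, 0))
		return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);

	/* no stable affinity to go by, use the naive spread */
	return blk_mq_map_queues(set);
}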
[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18

queue 9 is not mapped (overlap).
please try the below:

This seems to work. Here are three mapping cases: each vector on its
own cpu, each vector on 1 cpu within the local numa node, and each
vector having all cpus in its numa node.
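For anyone reproducing those layouts: per-vector affinity is set from userspace by writing a CPU list to /proc/irq/<N>/smp_affinity_list, which is exactly the user-configurable affinity under discussion. A small stand-alone helper follows; the IRQ range 100..107 and the CPU numbering are hypothetical examples, since the real vector IRQs are per-system.

/*
 * Reproduction helper for the "each vector on its own cpu" layout.
 * IRQ numbers below are made up for illustration.
 */
#include <stdio.h>

static int set_irq_affinity(int irq, const char *cpulist)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* the write is rejected on IRQs whose affinity the kernel manages */
	fprintf(f, "%s\n", cpulist);
	return fclose(f);
}

int main(void)
{
	char cpulist[16];
	int irq, cpu;

	for (irq = 100, cpu = 0; irq <= 107; irq++, cpu++) {
		snprintf(cpulist, sizeof(cpulist), "%d", cpu);
		if (set_irq_affinity(irq, cpulist))
			perror("smp_affinity_list");
	}
	return 0;
}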
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
On 7/18/2018 2:38 PM, Sagi Grimberg wrote:

IMO we must fulfil the user's wish to connect to N queues and not reduce
it because of affinity overlaps. So in order to push Leon's patch we
must also fix blk_mq_rdma_map_queues to do a best-effort mapping
according to the affinity, and map the rest in a naive way (in that way
we will *always* map
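Read literally, "best effort, then naive for the rest" could look like the sketch below. The function name blk_mq_rdma_map_queues_best_effort() is made up here, and this is one interpretation of the idea, not Max's or Sagi's actual patch.

#include <linux/kernel.h>
#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>

/*
 * One interpretation of "best effort, then naive", not a posted patch:
 * honor the vector affinity where it yields a mapping, then round-robin
 * the CPUs that no vector claimed, so the user's N queues are never
 * refused because of overlaps.  A fuller version would also re-home a
 * CPU onto any hw queue that ended up empty.
 */
static int blk_mq_rdma_map_queues_best_effort(struct blk_mq_tag_set *set,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu, next = 0;

	for_each_possible_cpu(cpu)
		set->mq_map[cpu] = UINT_MAX;	/* not mapped yet */

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			return blk_mq_map_queues(set);
		for_each_cpu(cpu, mask)
			if (set->mq_map[cpu] == UINT_MAX)
				set->mq_map[cpu] = queue;
	}

	/* naive pass over whatever the affinity masks did not cover */
	for_each_possible_cpu(cpu)
		if (set->mq_map[cpu] == UINT_MAX)
			set->mq_map[cpu] = next++ % set->nr_hw_queues;

	return 0;
}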
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
>> Hi,
>> I've tested this patch and it seems problematic at this moment.
>
> Problematic how? What are you seeing?

Connection failures and same error Steve saw:

[Mon Jul 16 16:19:11 2018] nvme nvme0: Connect command failed, error
wo/DNR bit: -16402
[Mon
> Hi,
> I've tested this patch and it seems problematic at this moment.

Problematic how? What are you seeing?

> maybe this is because of the bug that Steve mentioned in the NVMe
> mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA
> initiator and I'll run his suggestion as well.

Is y
Hi,

I've tested this patch and it seems problematic at this moment.

maybe this is because of the bug that Steve mentioned in the NVMe
mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA
initiator and I'll run his suggestion as well.

BTW, when I run the blk_mq_map_queues it works fo
On Mon, Jul 16, 2018 at 01:23:24PM +0300, Sagi Grimberg wrote:
> Leon, I'd like to see a tested-by tag for this (at least
> until I get some time to test it).
Of course.
Thanks
>
> The patch itself looks fine to me.