> I tested this by simulating a slow passive side responder, and it worked as
> expected for those tests. Using an MRA does add another MAD to the CM
> exchange, which is why it is sent only after seeing a duplicate request.
> Alternatively, we can take the OFED module parameter patch
> Umm... this is a difficult situation for me to merge the changes, then.
> We're changing the CM retry behavior blind here. How do we know that
> the MRA changes don't make the scalability issue worse?
What's currently upstream doesn't work for Intel MPI on our larger clusters.
The connection reques
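To make the MRA policy described above concrete, here is a small standalone C sketch of the passive side's decision. It is purely illustrative, not the kernel's ib_cm code: the struct, the handler, and the MRA_SERVICE_TIMEOUT value are invented for the example; only the policy (stay quiet on the first REQ, send a single MRA once a duplicate REQ shows the active side is already retrying) comes from the discussion above.

/*
 * Standalone illustration (not the kernel ib_cm code) of the policy
 * described above: the passive side stays quiet on the first REQ and
 * only answers with an MRA once a duplicate REQ arrives, i.e. once the
 * active side has already started retrying.
 */
#include <stdbool.h>
#include <stdio.h>

struct passive_conn {
	unsigned int req_count;	/* how many times this REQ has arrived */
	bool         mra_sent;	/* MRA already sent for this REQ?      */
};

/* Hypothetical service timeout (in CM timeout units) carried in the MRA. */
#define MRA_SERVICE_TIMEOUT 24

static void handle_req(struct passive_conn *c)
{
	c->req_count++;

	if (c->req_count == 1) {
		/* Fast path: no extra MAD; just start processing the REQ. */
		printf("REQ #1: processing, no MRA sent\n");
		return;
	}

	if (!c->mra_sent) {
		/*
		 * Duplicate REQ: the active side is retrying because we are
		 * slow.  Send one MRA so it backs off to the longer service
		 * timeout instead of burning through its retries.
		 */
		c->mra_sent = true;
		printf("REQ #%u (duplicate): sending MRA, timeout=%d\n",
		       c->req_count, MRA_SERVICE_TIMEOUT);
	} else {
		/* Further duplicates: the MRA already covers them. */
		printf("REQ #%u (duplicate): MRA already sent, ignoring\n",
		       c->req_count);
	}
}

int main(void)
{
	struct passive_conn c = { 0 };

	/* Simulate a slow responder that sees the same REQ three times. */
	handle_req(&c);
	handle_req(&c);
	handle_req(&c);
	return 0;
}

Run on its own, the simulation just prints the decision taken for each arrival of the same REQ, which is enough to see why the common fast path costs no extra MAD.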
> >OK -- just to make sure I'm understanding what you're saying: have you
> >confirmed that your proposed [CM MRA] patches actually fix the issue?
>
> Not directly. I cannot easily test kernel patches on our larger, production
> clusters. We've seen the issue with specific applications on 512 and 1024 cores
Hal Rosenstock wrote:
Has anyone tested these with QoS actually being used? I suppose this
requires ConnectX.
You can test it with a switch without ConnectX.
If you want the HCA to react to the QoS setting as well, then you need
ConnectX.
Tziporet
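For a sense of what "QoS actually being used" can look like from an application, here is a hedged userspace sketch using the rdma_cm type-of-service option. This assumes the RDMA_OPTION_ID_TOS option in librdmacm, which is related to but not the same thing as the SA-level QoS patches under review; whether the ToS maps to a distinct SL/VL is up to the subnet manager's QoS policy and, as discussed above, the hardware.

/*
 * Hedged sketch: one way an application asks for differentiated service
 * through the rdma_cm, by setting a type-of-service value on the
 * connection id before resolving/connecting.
 */
#include <stdio.h>
#include <stdint.h>
#include <rdma/rdma_cma.h>

int main(void)
{
	struct rdma_event_channel *ch;
	struct rdma_cm_id *id;
	uint8_t tos = 32;	/* example value; site policy defines the mapping */

	ch = rdma_create_event_channel();
	if (!ch) {
		perror("rdma_create_event_channel");
		return 1;
	}

	if (rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
		perror("rdma_create_id");
		return 1;
	}

	/* Request a service class; address resolution and rdma_connect()
	 * would follow in a real client. */
	if (rdma_set_option(id, RDMA_OPTION_ID, RDMA_OPTION_ID_TOS,
			    &tos, sizeof tos))
		perror("rdma_set_option(TOS)");
	else
		printf("requested ToS %u on the rdma_cm id\n",
		       (unsigned int)tos);

	rdma_destroy_id(id);
	rdma_destroy_event_channel(ch);
	return 0;
}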
On Thursday 13 September 2007 20:57, Roland Dreier wrote:
> HW specific:
>
> - I already merged patches to enable MSI-X by default for mthca and
> mlx4. I hope there aren't too many systems that get hosed if an
> MSI-X interrupt is generated.
>
> - Jack and Michael's mlx4 FMR support. Wi
> The IGMP enabling patch posted by me on September 2nd isn't on your list
> http://lists.openfabrics.org/pipermail/general/2007-September/040250.html
> can you add it?
Yes, I lost that somehow. I will add it to my list of things to take
a look at (no opinion yet).
- R.
Roland Dreier wrote:
With 2.6.24 probably opening in the not-too-distant future, it's a good
time to review what my plans are for when the merge window opens.
Core:
- Sean's QoS changes. These look fine at first glance, and I just
plan to understand the backwards compatibility st
Roland Dreier wrote:
> I was about to post v2 of my patch to avoid port space collisions with
> the native stack. Can we get that into 2.6.24? It is high priority
> IMO. I've tried to solicit review on it, but I think folks are
> reluctant... ;-)
I would like to get this in, but I'm still at least a little reluctant,
On Fri, 2007-09-14 at 09:18 -0700, Roland Dreier wrote:
> However, do you have any plans to support iSCSI offload for targets?
> Also, looking at the first CNIC patch, I can't help but notice that
> you seem to have at least some support for iWARP there. How does the
> CNIC look? Does it share t
>OK -- just to make sure I'm understanding what you're saying: have you
>confirmed that your proposed patches actually fix the issue?
Not directly. I cannot easily test kernel patches on our larger, production
clusters. We've seen the issue with specific applications on 512 and 1024
cores, but I
> > I've been meaning to track down the bnx2 iscsi offload patch to look
> > and see if this issue is addressed, since the same problem seems to
> > exist: it seems an iscsi connection and a main stack tcp connection
> > might share the same 4-tuple unless something is done to avoid that
> > h
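The worry in this sub-thread is that an offloaded connection and a host-stack TCP connection could end up with the identical (source IP, source port, destination IP, destination port) tuple. One illustrative mitigation, sketched below purely as an assumption and not as what bnx2/cnic actually does, is to hold a bound but unconnected host-stack socket on the source port for as long as the offloaded connection lives, so the native stack can neither bind that port nor hand it out as an ephemeral source port.

/*
 * Illustrative sketch only -- not the bnx2/cnic approach -- of one way
 * to keep the host TCP stack from reusing a source port that an
 * offloaded connection is about to use: hold a bound (but unconnected)
 * socket on that port for the lifetime of the offloaded connection.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Reserve local_port on local_ip; returns the holder fd or -1. */
static int reserve_src_port(const char *local_ip, uint16_t local_port)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(local_port);
	if (inet_pton(AF_INET, local_ip, &addr.sin_addr) != 1 ||
	    bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;	/* port already taken by the host stack */
	}
	return fd;		/* keep open while the offloaded conn lives */
}

int main(void)
{
	int holder = reserve_src_port("127.0.0.1", 40000);

	if (holder < 0) {
		perror("reserve_src_port");
		return 1;
	}
	printf("source port 40000 reserved from the host stack\n");
	/* ... the offloaded connection would be established here ... */
	close(holder);
	return 0;
}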
On Thu, Sep 13, 2007 at 01:59:21PM -0500, Steve Wise ([EMAIL PROTECTED]) wrote:
> >Well, if it involves /sharing/ port space with the native stack, i.e.
> >where port 1234 is IB but 1235 is Linux, pretty much all the networking
> >devs have NAK'd that approach AFAICS.
>
> Jeff, I posted a fix th
On Thu, 2007-09-13 at 14:11 -0700, Roland Dreier wrote:
>
> I've been meaning to track down the bnx2 iscsi offload patch to look
> and see if this issue is addressed, since the same problem seems to
> exist: it seems an iscsi connection and a main stack tcp connection
> might share the same 4-tup
> Well, if it involves /sharing/ port space with the native stack,
> i.e. where port 1234 is IB but 1235 is Linux, pretty much all the
> networking devs have NAK'd that approach AFAICS.
Just to be clear, InfiniBand has no problem; the issue is port
collisions involving iWARP connections.
- R.
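The collision Roland is narrowing down can be shown with stock userspace APIs: the kernel TCP stack and the rdma_cm's RDMA_PS_TCP space hand out ports independently, so both binds in the hedged demo below can succeed on the same port number. For InfiniBand that is harmless, but an iWARP listener shares the interface's IP address with the host stack, which is the overlap Steve's patch is meant to close. Exact behavior depends on the kernel version and on whether such a patch is applied; the port number is arbitrary.

/*
 * Hedged demo of the collision being discussed: the kernel TCP stack
 * and the rdma_cm RDMA_PS_TCP port space allocate ports independently,
 * so both binds below can succeed on the same port on an unpatched
 * setup.
 */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <rdma/rdma_cma.h>

#define PORT 7471	/* arbitrary example port */

int main(void)
{
	struct sockaddr_in addr;
	struct rdma_event_channel *ch;
	struct rdma_cm_id *id;
	int sock = socket(AF_INET, SOCK_STREAM, 0);

	if (sock < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(PORT);
	addr.sin_addr.s_addr = htonl(INADDR_ANY);

	if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)))
		perror("native TCP bind");
	else
		printf("native stack: bound TCP port %d\n", PORT);

	ch = rdma_create_event_channel();
	if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
		perror("rdma_create_id");
		return 1;
	}

	if (rdma_bind_addr(id, (struct sockaddr *)&addr))
		perror("rdma_bind_addr");
	else
		printf("rdma_cm: bound RDMA_PS_TCP port %d too\n", PORT);

	return 0;
}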
> I was about to post v2 of my patch to avoid port space collisions with
> the native stack. Can we get that into 2.6.24? It is high priority
> IMO. I've tried to solicit review on it, but I think folks are
> reluctant... ;-)
I would like to get this in, but I'm still at least a little
reluctant,
> > - My user_mad P_Key index support patch. I'll test the ioctl to
> > change to the new mode and merge this I guess, since Hal and Sean
> > have tested this out.
>
> I can give this patch a reviewed-by: too, and I will also try to review a
> couple
> of the pending ipoib patches.
T
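For reference, the user_mad P_Key index change mentioned above is opt-in per file descriptor: a consumer issues an ioctl on its /dev/infiniband/umadN descriptor to switch to the layout whose MAD header carries a pkey_index. The sketch below assumes the ioctl is named IB_USER_MAD_ENABLE_PKEY and is exported through <rdma/ib_user_mad.h>, as in the patch under discussion; on kernels without the patch the ioctl simply fails and the old layout stays in effect.

/*
 * Sketch of opting in to the pkey-aware ib_user_mad format.  The ioctl
 * name IB_USER_MAD_ENABLE_PKEY is taken from the patch under
 * discussion; treat the exact name/header as an assumption.
 */
#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <rdma/ib_user_mad.h>

int main(void)
{
	int fd = open("/dev/infiniband/umad0", O_RDWR);

	if (fd < 0) {
		perror("open umad0");
		return 1;
	}

	/* Kernels without the patch reject this with ENOTTY/EINVAL. */
	if (ioctl(fd, IB_USER_MAD_ENABLE_PKEY) < 0)
		perror("IB_USER_MAD_ENABLE_PKEY");
	else
		printf("umad fd switched to the pkey_index-aware layout\n");

	/* Subsequent read()/write() calls on fd would use the header
	 * format that carries pkey_index. */
	close(fd);
	return 0;
}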
Steve Wise wrote:
Jeff Garzik wrote:
Steve Wise wrote:
I was about to post v2 of my patch to avoid port space collisions
with the native stack. Can we get that into 2.6.24? It is high priority
IMO. I've tried to solicit review on it, but I think folks are
reluctant... ;-)
Well, if it involves
Jeff Garzik wrote:
Steve Wise wrote:
I was about to post v2 of my patch to avoid port space collisions with
the native stack. Can we get that into 2.6.24? It is high priority IMO.
I've tried to solicit review on it, but I think folks are reluctant...
;-)
Well, if it involves /sharing/ port sp
Steve Wise wrote:
I was about to post v2 of my patch to avoid port space collisions with
the native stack. Can we get that into 2.6.24? It is high priority IMO.
I've tried to solicit review on it, but I think folks are reluctant... ;-)
Well, if it involves /sharing/ port space with the native sta
> - My user_mad P_Key index support patch. I'll test the ioctl to
> change to the new mode and merge this I guess, since Hal and Sean
> have tested this out.
I can give this patch a reviewed-by: too, and I will also try to review a couple
of the pending ipoib patches.
> - Sean's QoS changes.
Hey Roland,
I was about to post v2 of my patch to avoid port space collisions with
the native stack. Can we get that into 2.6.24? It is high priority IMO.
I've tried to solicit review on it, but I think folks are reluctant... ;-)
Steve.
Roland Dreier wrote:
With 2.6.24 probably opening in th