On Tue, Apr 19, 2011 at 7:42 AM, K. Macy wrote:
>> I'm not able to find IFNET_MULTIQUEUE in a recent 8.2-STABLE, is this
>> something present only in HEAD?
>
> It looks like it is now EM_MULTIQUEUE.

Just curious, how would one enable this to test it? We have igb(4)
interfaces in our new stor…
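For testing, the era's multiqueue knobs were roughly as sketched below. This is an assumption-laden sketch, not a verified recipe: option and tunable names varied between driver versions, so check sys/conf/NOTES and the driver source on your exact branch before relying on any of them.

```shell
# em(4): EM_MULTIQUEUE was a compile-time kernel option; add to the
# kernel config and rebuild:
#   options EM_MULTIQUEUE

# igb(4): the queue count was a loader tunable, set in /boot/loader.conf
# (name and default vary by driver version):
#   hw.igb.num_queues=4

# Verify queue/interrupt assignment after boot:
vmstat -i | grep igb
sysctl dev.igb.0
```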
> Hi,
>
> I'm not able to find IFNET_MULTIQUEUE in a recent 8.2-STABLE, is this
> something present only in HEAD?

It looks like it is now EM_MULTIQUEUE.

Cheers
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebs…
It occurred to me that I should add a couple of qualifications to the
previous statements. 1.6Mpps is line rate for GigE and I only know of
it to be achievable by igb hardware. The most I've seen em hardware
achieve is 1.1Mpps. Furthermore, in order to achieve that you would
have to enable IFNET_MULTIQUEUE…
400kpps is not a large enough measure to reach any conclusions. A
system like that should be able to push at least 2.3Mpps with
flowtable. I'm not saying that what you've done is not an improvement,
but rather that you're hitting some other bottleneck. The output of
pmc and LOCK_PROFILING might be helpful.
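The two diagnostics mentioned here could be driven roughly as follows. A sketch only: LOCK_PROFILING requires a specially built kernel, and the pmc event name depends on the CPU, so treat every name below as something to verify on your system.

```shell
# Lock contention: needs a kernel built with
#   options LOCK_PROFILING
sysctl debug.lock.prof.reset=1    # clear counters
sysctl debug.lock.prof.enable=1   # start collecting
# ... generate packet load ...
sysctl debug.lock.prof.enable=0
sysctl debug.lock.prof.stats      # dump per-lock contention stats

# CPU hot spots: system-wide sampling with hwpmc(4)/pmcstat(8)
kldload hwpmc
pmcstat -S instructions -T        # top(1)-like view of hot functions
```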
It would be great to see flowtable going back to its intended use.
However, I would be surprised if this actually scales to Mpps. I don't
have any high end hardware at the moment to test; what is the highest
packet rate you've seen, i.e. simply generating small packets?
Currently I have no tes…

Thanks
Hi,

regarding multipath problems:

Setup which should work:

   +--------+        +--------+
   |Router A|-(ospf)-|Router B|
   +--------+        +--------+
       |(carp)           |(carp)
       |                 |
       +--------+--------+
                |
               LAN
example that does not work:

ifconfig em0 192.168.0.1/24
ifconfig em1 10.0.0.1/24
route add 10.0.0.0/24 192.168.0.2

What doesn't work? The add or the delete operation?
I can add and delete the 10.0.0.0/24 route fine on my system.

try the attached script.

now with script.

Kind regards,
Hi,

kern/155772 can be resolved using RADIX_MPATH.

regarding kern/155772:
at stock 8.2 FreeBSD the system panics after ifconfig down / ifconfig up /
ifconfig down with 1 route and 1 interface route (multipath).

What's the exact step and a specific example that triggers a panic?

ifconf…
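Reconstructing the reported scenario from the description above, the repro presumably looks roughly like this (the interface names and addresses are borrowed from the example elsewhere in the thread, so treat this as a sketch of the report, not a confirmed test case):

```shell
# ECMP support is a compile-time kernel option:
#   options RADIX_MPATH

ifconfig em0 192.168.0.1/24
ifconfig em1 10.0.0.1/24             # interface route for 10.0.0.0/24
route add 10.0.0.0/24 192.168.0.2    # second path for the same prefix

# Reported crash sequence on stock 8.2 (without RADIX_MPATH):
ifconfig em1 down
ifconfig em1 up
ifconfig em1 down
```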
Hi,

I see. What you are saying is the rtalloc() call does not have an
indicator whether it should be searching for an interface route or not.
In the case when RADIX_MPATH is enabled, in_lltable_rtcheck() needs to
walk the ECMP route chain to find an interface route.

yes.

Bye,

-- Qing
From: owner-freebsd-...@freebsd.org [owner-freebsd-...@freebsd.org] on behalf of Ingo Flaschberger [i...@freebsd.org]
Sent: Tuesday, April 05, 2011 8:31 AM
Cc: Nikolay Denev; freebsd-net@freebsd.org
Subject: Re: Routing enhancement - reduce routing tab…
Can you say something more about:
"implement some multipath changes to use a direct attached
interface route and a real route, used some OpenBSD code"?
I've looked at the patch but it's not obvious to me.

P.S.: I just saw your reply to kern/155772 and was wondering if this
patch can…
On Apr 5, 2011, at 4:26 AM, Ingo Flaschberger wrote:
> Hi,
>
> I have written a patch to:
> *) reduce locking of the routing table to achieve the same speed as with
>    flowtables, which do not scale with many routes:
>    - use a copy of the route
>    - use rmlock(9)
>      (idea of Andre Op…