On Fri, Apr 11, 2014 at 8:23 PM, hiren panchasara <hiren.panchas...@gmail.com> wrote:
> On Fri, Apr 11, 2014 at 11:30 AM, Patrick Kelsey wrote:
> >
> > The output of netstat -Q shows IP dispatch is set to default, which is
> > direct (NETISR_DISPATCH_DIRECT). That means each IP packet will be
>
On Fri, Apr 11, 2014 at 11:30 AM, Patrick Kelsey wrote:
>
> The output of netstat -Q shows IP dispatch is set to default, which is
> direct (NETISR_DISPATCH_DIRECT). That means each IP packet will be
> processed on the same CPU that the Ethernet processing for that packet was
> performed on, so C
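To make the direct-dispatch point concrete, here is a from-memory sketch of how a protocol registers with netisr; the field and constant names follow sys/net/netisr.h as I remember it (check your tree), and example_input/example_init are hypothetical:

#include <sys/mbuf.h>
#include <net/netisr.h>

/*
 * Hedged sketch, not a drop-in: a netisr protocol registers a handler
 * plus placement and dispatch policies. Leaving nh_dispatch at
 * NETISR_DISPATCH_DEFAULT defers to the global net.isr.dispatch
 * sysctl; when that resolves to "direct" (NETISR_DISPATCH_DIRECT),
 * the handler runs on whichever CPU the driver delivered the packet.
 */
static void	example_input(struct mbuf *m);	/* hypothetical handler */

static struct netisr_handler example_nh = {
	.nh_name = "example",
	.nh_handler = example_input,
	.nh_proto = NETISR_IP,			/* IP's slot, as in ip_input.c */
	.nh_policy = NETISR_POLICY_FLOW,	/* place work by mbuf flowid */
	.nh_dispatch = NETISR_DISPATCH_DEFAULT,
};

static void
example_init(void)
{
	netisr_register(&example_nh);
}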
On Fri, Apr 11, 2014 at 4:16 PM, hiren panchasara wrote:
> On Fri, Apr 11, 2014 at 4:15 AM, Eggert, Lars wrote:
>> Hi,
>>
>> since folks are playing with Midori's DCTCP patch, I wanted to make sure
>> that you were also aware of the patches that Aris did for PRR and NewCWV...
>
> Lars
Well, ethernet drivers nowadays seem to be doing:
* always queue
* then pop the head item off the queue and transmit that.
-a
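To make Adrian's two bullets concrete, here is a hedged sketch in the shape of a FreeBSD if_transmit method using the drbr(9) buf_ring wrappers. Only the drbr_* calls are the real API; example_softc and example_hw_xmit are made up, and locking is elided:

#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

struct example_softc {
	struct buf_ring	*sc_br;		/* software transmit ring */
};

static int	example_hw_xmit(struct example_softc *, struct mbuf *);

static int
example_transmit(struct ifnet *ifp, struct mbuf *m)
{
	struct example_softc *sc = ifp->if_softc;
	int error;

	/* Always queue first, even when the ring is empty... */
	error = drbr_enqueue(ifp, sc->sc_br, m);
	if (error != 0)
		return (error);

	/* ...then pop from the head and hand packets to the hardware. */
	while ((m = drbr_peek(ifp, sc->sc_br)) != NULL) {
		if (example_hw_xmit(sc, m) != 0) {
			/* Out of descriptors: put the head back, order kept. */
			drbr_putback(ifp, sc->sc_br, m);
			break;
		}
		drbr_advance(ifp, sc->sc_br);
	}
	return (0);
}

The peek/putback/advance dance is what keeps the head packet at the head when the hardware ring is full, which is exactly the ordering concern in this thread.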
On 11 April 2014 11:59, Julian Elischer wrote:
> disclaimer: I'm not looking at the code now... I want to go to bed. :-)
>
> When I wrote that code, the idea was that e
disclaimer: I'm not looking at the code now... I want to go to bed. :-)
When I wrote that code, the idea was that even a direct node execution
should become a queuing operation if there was already something else
on the queue. So in that model packets were not supposed to get
re-ordered. does
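Here is a userland toy (explicitly not netgraph code) of the invariant Julian is describing: direct execution is only allowed when nothing is queued; otherwise the item is appended, so arrival order is preserved:

#include <stdbool.h>
#include <stddef.h>

struct item {
	struct item	*next;
};

struct node {
	struct item	*head, *tail;	/* FIFO of pending items */
	bool		 busy;		/* node currently executing */
};

static void
process(struct item *it)
{
	(void)it;			/* real work would happen here */
}

static void
deliver(struct node *n, struct item *it)
{
	/*
	 * Direct execution is only safe when the node is idle AND the
	 * queue is empty; otherwise this item would overtake the ones
	 * already waiting and arrive re-ordered.
	 */
	if (!n->busy && n->head == NULL) {
		n->busy = true;
		process(it);		/* "direct node execution" */
		n->busy = false;
		return;
	}
	it->next = NULL;		/* fall back to queueing */
	if (n->tail != NULL)
		n->tail->next = it;
	else
		n->head = it;
	n->tail = it;
}

In real code the busy/empty test and the append would of course have to happen under the node's lock.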
On Fri, Apr 11, 2014 at 2:48 AM, Adrian Chadd wrote:
> [snip]
>
> So, hm, the thing that comes to mind is the flowid. What are the various
> flowids for the flows? Are they all mapping to CPU 3 somehow?
The output of netstat -Q shows IP dispatch is set to default, which is
direct (NETISR_DISPATCH_DIRECT).
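As a quick aside on the "all on CPU 3" theory: if work placement boils down to something like flowid % ncpus, an unlucky set of hashes really can land every flow on one CPU. A self-contained toy (the flowids are invented):

#include <stdio.h>

int
main(void)
{
	/* Four invented flowids, all congruent to 3 (mod 4). */
	unsigned int flowids[] = { 0x2003, 0x4007, 0x800b, 0xc00f };
	unsigned int ncpus = 4;

	for (size_t i = 0; i < sizeof(flowids) / sizeof(flowids[0]); i++)
		printf("flowid 0x%04x -> cpu %u\n", flowids[i],
		    flowids[i] % ncpus);
	return (0);
}

All four lines print "cpu 3"; netstat -Q's per-CPU queue counters would show the same pile-up.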
On Fri, Apr 11, 2014 at 4:15 AM, Eggert, Lars wrote:
> Hi,
>
> since folks are playing with Midori's DCTCP patch, I wanted to make sure that
> you were also aware of the patches that Aris did for PRR and NewCWV...
Lars,
There are no actual patches attached here. (Or the mailing-list dropped them.)
Hi,
I had a similar problem in the past and it turned out to be the amount of rules
in ipfw.
Using a reduced subset with tables actually reduced the load.
Sami
On Friday, April 11, 2014, Dennis Yusupoff wrote:
> Good day, gurus!
>
> We have servers running FreeBSD. They do NAT, shaping and traffic
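To illustrate why Sami's suggestion helps, a toy in C (not ipfw internals; real ipfw tables are radix lookups over prefixes, and exact-match addresses keep this short): a packet checked against N discrete rules costs N comparisons, while one table covering the same addresses costs a single O(log N) lookup:

#include <stdint.h>
#include <stdlib.h>

static int
cmp(const void *a, const void *b)
{
	uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;

	return ((x > y) - (x < y));
}

/* One rule per address: O(N) comparisons per packet. */
static int
match_rules(const uint32_t *rules, size_t n, uint32_t addr)
{
	for (size_t i = 0; i < n; i++)
		if (rules[i] == addr)
			return (1);
	return (0);
}

/* One sorted table for all addresses: O(log N) per packet. */
static int
match_table(const uint32_t *table, size_t n, uint32_t addr)
{
	return (bsearch(&addr, table, n, sizeof(addr), cmp) != NULL);
}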
Hi,
since folks are playing with Midori's DCTCP patch, I wanted to make sure that
you were also aware of the patches that Aris did for PRR and NewCWV...
Lars
On 2014-2-4, at 10:38, Eggert, Lars wrote:
> Hi,
>
> below are two patches that implement RFC6937 ("Proportional Rate Reduction
> for TCP")
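For anyone who wants the gist of RFC 6937 before reading the patch: on each ACK during recovery, the send quantum reduces cwnd in proportion to the data actually delivered. A hedged transcription of the RFC's pseudocode into C (names from the RFC, not from Aris's patch):

#include <stdint.h>

static int64_t
prr_sndcnt(int64_t prr_delivered, int64_t prr_out, int64_t pipe,
    int64_t ssthresh, int64_t recover_fs, int64_t delivered_data,
    int64_t mss)
{
	int64_t sndcnt, limit;

	if (pipe > ssthresh) {
		/* Proportional part:
		 * ceil(prr_delivered * ssthresh / RecoverFS) - prr_out. */
		sndcnt = (prr_delivered * ssthresh + recover_fs - 1) /
		    recover_fs - prr_out;
	} else {
		/* PRR-SSRB: grow back toward ssthresh at slow-start pace. */
		limit = (prr_delivered - prr_out > delivered_data ?
		    prr_delivered - prr_out : delivered_data) + mss;
		sndcnt = (ssthresh - pipe < limit ? ssthresh - pipe : limit);
	}
	return (sndcnt > 0 ? sndcnt : 0);
}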
Good day, gurus!
We have servers running FreeBSD. They do NAT, shaping and traffic
accounting for our (mainly home) customers.
NAT is done with pf nat, shaping with ipfw dummynet, and traffic
accounting with ng_netflow via ipfw ng_tee.
The problem is performance under (relatively) high traffic.
On X
Hi.
Can someone explain to me where the 4 missing bytes are when capturing
traffic on a gif interface with tcpdump?
I expect the length of the first fragment (offset = 0) to be
equal to the MTU (1280 bytes), but it's clearly 1276 bytes.
The same thing happens with a gre tunnel.
# ifconfig gif0
gi