Hi,
I'm developing an application that needs a high rate of small TCP
transactions on multi-core systems, and I'm hitting a limit where a
kernel task, usually swi:net (but it depends on the driver) hits 100% of
a CPU at some transactions/s rate and blocks further performance
increase even though o
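(Not part of the original mail, but for readers trying to reproduce the symptom: below is a minimal sketch of the kind of load generator that drives it, assuming a hypothetical echo-style server on 127.0.0.1 port 9999. Running one instance per core in parallel is roughly what pushes per-packet kernel work to the point where a single swi:net thread saturates.)

/*
 * Minimal sketch of a small-transaction TCP load generator.
 * Assumptions (not from the original mail): an echo-style server
 * is listening on 127.0.0.1:9999 and answers each 64-byte request
 * with a short reply.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sin;
	char req[64] = "ping", resp[64];
	unsigned long count = 0;
	time_t start = time(NULL);

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(9999);
	sin.sin_addr.s_addr = inet_addr("127.0.0.1");

	/* Run for ten seconds and report transactions per second. */
	while (time(NULL) - start < 10) {
		int s = socket(AF_INET, SOCK_STREAM, 0);
		if (s < 0 || connect(s, (struct sockaddr *)&sin,
		    sizeof(sin)) != 0) {
			if (s >= 0)
				close(s);
			continue;
		}
		if (write(s, req, sizeof(req)) == (ssize_t)sizeof(req))
			(void)read(s, resp, sizeof(resp));	/* one small reply */
		close(s);
		count++;
	}
	printf("%lu transactions in 10 s (~%lu/s)\n", count, count / 10);
	return (0);
}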
On Sun, 5 Apr 2009, Ivan Voras wrote:
I'm developing an application that needs a high rate of small TCP
transactions on multi-core systems, and I'm hitting a limit where a kernel
task, usually swi:net (but it depends on the driver) hits 100% of a CPU at
some transactions/s rate and blocks fur
On Sun, 5 Apr 2009, Ivan Voras wrote:
I thought this has something to do with NIC moderation (em) but can't
really explain it. The bad performance part (not the jump) is also visible
over the loopback interface.
FYI, if you want high performance, you really want a card supporting multiple
Robert Watson wrote:
>
> On Sun, 5 Apr 2009, Ivan Voras wrote:
>
>> I thought this has something to do with NIC moderation (em) but
>> can't really explain it. The bad performance part (not the jump) is
>> also visible over the loopback interface.
>
> FYI, if you want high performance, you rea
On Sun, 5 Apr 2009, Ivan Voras wrote:
I thought this has something to do with NIC moderation (em) but can't
really explain it. The bad performance part (not the jump) is also visible
over the loopback interface.
FYI, if you want high performance, you really want a card supporting
multiple
--- On Sun, 4/5/09, Robert Watson wrote:
> From: Robert Watson
> Subject: Re: Advice on a multithreaded netisr patch?
> To: "Ivan Voras"
> Cc: freebsd-net@freebsd.org
> Date: Sunday, April 5, 2009, 9:54 AM
> On Sun, 5 Apr 2009, Ivan Voras wrote:
>
> >>> I thought this has something to dea
Upakul Barkakaty wrote:
Hi all,
I was trying to set up multicast tunneling with FreeBSD, using the
mrouted utility. However, my multicast router doesn't seem to be forwarding
those multicast packets.
It would really be helpful if someone could help me with the setup or the
mrouted.conf file.
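For anyone stuck at the same point, the general shape of an mrouted.conf that builds a DVMRP tunnel between two hosts is sketched below. All addresses are hypothetical placeholders, and the kernel must be built with multicast routing support (options MROUTING); mrouted(8) remains the authoritative reference for the exact syntax.

# Hypothetical /etc/mrouted.conf sketch; all addresses are placeholders.

# Keep multicast routing off interfaces that should not participate.
phyint 10.0.0.1 disable

# DVMRP tunnel from this router's address to the remote tunnel endpoint.
tunnel 192.0.2.1 198.51.100.1 metric 1 threshold 1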
Robert Watson wrote:
>
> On Sun, 5 Apr 2009, Ivan Voras wrote:
>
I thought this has something to do with NIC moderation (em) but
can't really explain it. The bad performance part (not the jump) is
also visible over the loopback interface.
>>>
>>> FYI, if you want high performance
On Sun, 5 Apr 2009, Barney Cordoba wrote:
I'm curious as to your assertion that hardware transmit queues are a big
win. You're really just loading a transmit ring well ahead of actual
transmission; there's no need to force a "start" for each packet queued. You
then have more overhead managing
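Here is a sketch of the pattern being described, with a made-up ring structure and a stubbed-out doorbell write rather than anything from a real driver: descriptors are filled as packets are queued, and the "start" (the register write telling the NIC new work is ready) is issued once per batch instead of once per packet.

/* Hypothetical sketch, not from any FreeBSD driver.  Illustrates
 * queueing several packets into a transmit ring and issuing the
 * hardware "start" (doorbell/tail write) once per batch instead of
 * once per packet. */
#include <stdio.h>

#define RING_SIZE 1024

struct tx_ring {
	const void *slots[RING_SIZE];	/* packets staged for DMA          */
	int         prod;		/* producer index (next free slot) */
};

/* Stand-in for the device register write that tells the NIC new
 * descriptors are ready; a real driver writes a tail pointer here. */
static void
write_tail_register(int tail)
{
	printf("doorbell: tail=%d\n", tail);
}

/* Stage one packet; no hardware access on this path. */
static void
ring_enqueue(struct tx_ring *r, const void *pkt)
{
	r->slots[r->prod] = pkt;
	r->prod = (r->prod + 1) % RING_SIZE;
}

/* Queue a whole batch, then kick the hardware once. */
static void
send_batch(struct tx_ring *r, const void *pkts[], int n)
{
	for (int i = 0; i < n; i++)
		ring_enqueue(r, pkts[i]);
	write_tail_register(r->prod);	/* one "start" for the batch */
}

int
main(void)
{
	struct tx_ring r = { .prod = 0 };
	const void *batch[4] = { "p0", "p1", "p2", "p3" };

	send_batch(&r, batch, 4);
	return (0);
}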
> Date: Sun, 5 Apr 2009 10:25:41 -0700 (PDT)
> From: Barney Cordoba
> Sender: owner-freebsd-...@freebsd.org
>
>
> As an aside, why is Kip doing development on a Chelsio card rather
> than a more mainstream product such as Intel or Broadcom that would
> generate more widespread interest?
Because
--- On Sun, 4/5/09, Kevin Oberman wrote:
> From: Kevin Oberman
> Subject: Re: Advice on a multithreaded netisr patch?
> To: barney_cord...@yahoo.com
> Cc: "Ivan Voras" , "Robert Watson" ,
> freebsd-net@freebsd.org
> Date: Sunday, April 5, 2009, 5:24 PM
> > Date: Sun, 5 Apr 2009 10:25:41 -07
On 7-STABLE, with kern.ipc.maxsockbuf=2621440, both sides set a window
scaling factor of 6 (i.e. SYN wscale 6, SYN-ACK wscale 6) using IPv4.
With the same value of kern.ipc.maxsockbuf, using IPv6, the side which
sends the initial SYN sets a window scaling factor of only 1, while
the other side set
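For reference, the advertised shift follows directly from the receive buffer limit: the stack picks the smallest wscale such that a 16-bit window, shifted left by it, covers the buffer. With kern.ipc.maxsockbuf=2621440, 65535 << 5 = 2097120 is still too small and 65535 << 6 = 4194240 suffices, hence wscale 6. The standalone sketch below only reproduces that arithmetic (the constants mirror TCP_MAXWIN and TCP_MAX_WINSHIFT; choose_wscale() itself is made up); a limit of roughly 64 KB is the sort of value that yields wscale 1, which may hint at which buffer limit the IPv6 path is consulting.

/* Sketch of how a BSD-style TCP stack chooses its window-scale shift:
 * the smallest wscale such that a 16-bit window, shifted, can cover
 * the receive buffer limit.  Constants mirror TCP_MAXWIN (65535) and
 * TCP_MAX_WINSHIFT (14); choose_wscale() itself is made up. */
#include <stdio.h>

#define TCP_MAXWIN		65535UL
#define TCP_MAX_WINSHIFT	14

static int
choose_wscale(unsigned long recvbuf_max)
{
	int wscale = 0;

	while (wscale < TCP_MAX_WINSHIFT &&
	    (TCP_MAXWIN << wscale) < recvbuf_max)
		wscale++;
	return (wscale);
}

int
main(void)
{
	/* kern.ipc.maxsockbuf from the report above. */
	printf("wscale for 2621440: %d\n", choose_wscale(2621440UL));	/* 6 */
	/* A limit just above 64 KB yields the wscale of 1 seen over IPv6. */
	printf("wscale for 65536:   %d\n", choose_wscale(65536UL));	/* 1 */
	return (0);
}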
On Sun, 5 Apr 2009, sth...@nethelp.no wrote:
On 7-STABLE, with kern.ipc.maxsockbuf=2621440, both sides set a window
scaling factor of 6 (i.e. SYN wscale 6, SYN-ACK wscale 6) using IPv4.
With the same value of kern.ipc.maxsockbuf, using IPv6, the side which
sends the initial SYN sets a window sc
On Sun, 5 Apr 2009, Bjoern A. Zeeb wrote:
On Sun, 5 Apr 2009, sth...@nethelp.no wrote:
On 7-STABLE, with kern.ipc.maxsockbuf=2621440, both sides set a window
scaling factor of 6 (i.e. SYN wscale 6, SYN-ACK wscale 6) using IPv4.
With the same value of kern.ipc.maxsockbuf, using IPv6, the side
On Sun, 5 Apr 2009, Ivan Voras wrote:
The argument is not that they are slower (although they probably are a bit
slower), but rather that they introduce serialization bottlenecks by requiring
synchronization between CPUs in order to distribute the work. Certainly
some of the scalability issues in
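A toy illustration of the serialization point in question (not kernel code): when work is distributed through a single shared queue, every CPU handing off or picking up a packet must take the same lock, and that lock, rather than the number of CPUs, ends up bounding throughput. Per-CPU queues remove it from the hot path, which is the trade-off being weighed in this thread.

/* Toy sketch of a shared hand-off queue whose single mutex is the
 * serialization point: every producer and consumer CPU must take
 * shared_q.lock, so the lock caps throughput regardless of core count. */
#include <pthread.h>
#include <stdio.h>

#define QLEN 4096

static struct {
	pthread_mutex_t lock;
	void           *items[QLEN];
	int             head, tail;
} shared_q = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Every CPU handing off a packet serializes on shared_q.lock. */
static int
shared_enqueue(void *item)
{
	int ok = 0;

	pthread_mutex_lock(&shared_q.lock);
	if ((shared_q.tail + 1) % QLEN != shared_q.head) {
		shared_q.items[shared_q.tail] = item;
		shared_q.tail = (shared_q.tail + 1) % QLEN;
		ok = 1;
	}
	pthread_mutex_unlock(&shared_q.lock);
	return (ok);
}

int
main(void)
{
	/* With per-CPU queues the lock above disappears from the hot path,
	 * which is the scalability argument being made here. */
	shared_enqueue("pkt");
	printf("queued: head=%d tail=%d\n", shared_q.head, shared_q.tail);
	return (0);
}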
Thanks for the ideas, I will try some of them. But I'd also like some
more clarifications:
Robert Watson wrote:
> On Sun, 5 Apr 2009, Ivan Voras wrote:
>> I'd like to understand more. If (in netisr) I have an mbuf with
>> headers, is this data already transferred from the card or is it
>> magically
Synopsis: [carp] [hang] use of carp(4) causes system to freeze
Responsible-Changed-From-To: freebsd-i386->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Apr 6 06:24:37 UTC 2009
Responsible-Changed-Why:
This does not sound i386-specific.
http://www.freebsd.org/cgi/query