On 06/30/2010 10:01 AM, Alexander Sack wrote:
On Mon, Jun 28, 2010 at 12:37 PM, Mike Carlson wrote:
I've got a 10Gb Intel NIC on a FreeBSD 8.0-p3/AMD64 system, using the ix
driver:
ix0: port
0xdce0-0xdcff mem 0xdf3a-0xdf3b,0xdf3c-
0xdf3f,0xdf39c000-0xdf39 irq 35 at
I've got a 10Gb Intel NIC on a FreeBSD 8.0-p3/AMD64 system, using the ix
driver:
ix0:
port 0xdce0-0xdcff mem 0xdf3a-0xdf3b,0xdf3c-
0xdf3f,0xdf39c000-0xdf39 irq 35 at device 0.0 on pci5
ix0: Using MSIX interrupts with 17 vectors
ix0: [ITHREAD]
...
ix0: Ethernet address: 00:
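The "17 vectors" line reflects the usual layout of one MSI-X vector per RX/TX queue pair plus one for link events, i.e. 16 queues on this machine. Roughly how a FreeBSD driver arrives at such a count (a sketch of the common pci_alloc_msix() pattern, not the actual ix/ixgbe code):

/*
 * Sketch only, not the ix/ixgbe driver: the common FreeBSD pattern for
 * sizing an MSI-X allocation to "one vector per queue + one for link".
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <dev/pci/pcivar.h>

static int
example_alloc_msix(device_t dev, int nqueues)
{
	int want, avail, error;

	want = nqueues + 1;		/* one per queue pair + one for link */
	avail = pci_msix_count(dev);	/* vectors the device advertises */
	if (avail < want)
		want = avail;
	error = pci_alloc_msix(dev, &want);	/* may grant fewer than asked */
	if (error != 0)
		return (error);
	device_printf(dev, "Using MSIX interrupts with %d vectors\n", want);
	/*
	 * The driver would then bus_setup_intr() each vector and bind the
	 * queue vectors to CPUs.
	 */
	return (0);
}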
On Sat, May 15, 2010 at 9:23 AM, Barney Cordoba
wrote:
>
>
> --- On Fri, 5/14/10, Alexander Sack wrote:
>
>> From: Alexander Sack
>> Subject: Re: Intel 10Gb
>> To: "Jack Vogel"
>> Cc: "Murat Balaban" , freebsd-net@freebsd.org,
--- On Fri, 5/14/10, Alexander Sack wrote:
> From: Alexander Sack
> Subject: Re: Intel 10Gb
> To: "Jack Vogel"
> Cc: "Murat Balaban" , freebsd-net@freebsd.org,
> freebsd-performa...@freebsd.org, "Andrew Gallatin"
> Date: Friday, May 14, 2010,
On Fri, May 14, 2010 at 1:01 PM, Jack Vogel wrote:
>
>
> On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote:
>>
>> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin
>> wrote:
>> > Alexander Sack wrote:
>> > <...>
>> >>> Using this driver/firmware combo, we can receive minimal packets at
On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote:
> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin
> wrote:
> > Alexander Sack wrote:
> > <...>
> >>> Using this driver/firmware combo, we can receive minimal packets at
> >>> line rate (14.8Mpps) to userspace. You can even access this usi
... to pass the FCS to the host."
> -----Original Message-----
> From: owner-freebsd-performa...@freebsd.org [mailto:owner-freebsd-
> performa...@freebsd.org] On Behalf Of Andrew Gallatin
> Sent: Friday, May 14, 2010 8:41 AM
> To: Alexander Sack
> Cc: Murat Balaban; freebsd-net@freeb
Alexander Sack wrote:
To use DCA you need:
- A DCA driver to talk to the IOATDMA/DCA PCIe device, and obtain the tag
table
- An interface that a client device (e.g., a NIC driver) can use to obtain
either the tag table, or at least the correct tag for the CPU that the
interrupt is bound to
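To make that concrete, here is a purely hypothetical sketch of those two pieces. FreeBSD has no such in-tree DCA client API, and every name below is invented for illustration: the DCA driver owns a per-CPU tag table read from the IOATDMA/DCA device, and the NIC driver asks for the tag of the CPU its interrupt is bound to, then folds that tag into its (device-specific) RX control setup so DMA writes are steered into that CPU's cache.

/*
 * Hypothetical sketch only -- not a real FreeBSD API.  Piece (1) is the
 * tag table the DCA driver reads from the IOATDMA/DCA device; piece (2)
 * is the lookup a NIC driver would call for the CPU its interrupt is
 * bound to.
 */
#include <stdint.h>
#include <stdio.h>

#define	DCA_MAXCPU	64

/* (1) Owned by the DCA driver: one tag per CPU, filled in at attach time. */
static uint8_t dca_tag_table[DCA_MAXCPU];

/* (2) Client interface: fetch the tag for a given CPU. */
static int
dca_get_tag(int cpuid, uint8_t *tag)
{
	if (cpuid < 0 || cpuid >= DCA_MAXCPU)
		return (-1);
	*tag = dca_tag_table[cpuid];
	return (0);
}

/* NIC driver side: fold the tag into a device-specific RX control word. */
static uint32_t
nic_rxctrl_with_dca(uint32_t rxctrl, int cpuid)
{
	uint8_t tag;

	if (dca_get_tag(cpuid, &tag) == 0)
		rxctrl = (rxctrl & ~0xffU) | tag;	/* bit layout is made up */
	return (rxctrl);
}

int
main(void)
{
	dca_tag_table[2] = 0x12;	/* pretend the DCA driver set this */
	printf("rxctrl for cpu 2: 0x%08x\n", nic_rxctrl_with_dca(0, 2));
	return (0);
}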
On Fri, May 14, 2010 at 11:41 AM, Andrew Gallatin wrote:
> Alexander Sack wrote:
>> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin
>> wrote:
>>> Alexander Sack wrote:
>>> <...>
> Using this driver/firmware combo, we can receive minimal packets at
> line rate (14.8Mpps) to userspace. Y
Alexander Sack wrote:
> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin
wrote:
>> Alexander Sack wrote:
>> <...>
Using this driver/firmware combo, we can receive minimal packets at
line rate (14.8Mpps) to userspace. You can even access this using a
libpcap interface. The tric
Alexander Sack wrote:
<...>
>> Using this driver/firmware combo, we can receive minimal packets at
>> line rate (14.8Mpps) to userspace. You can even access this using a
>> libpcap interface. The trick is that the fast paths are OS-bypass,
>> and don't suffer from OS overheads, like lock contention
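For reference, the libpcap side of that looks like any ordinary capture program: the point of exposing the fast path through a libpcap interface is that unmodified code such as the sketch below keeps working, while packets are delivered by the vendor's OS-bypass path instead of BPF. The interface name "ix0" is only an example here; build with -lpcap.

/*
 * Minimal libpcap consumer.  Nothing here is specific to any vendor's
 * OS-bypass shim; that is the point of offering a libpcap interface.
 */
#include <sys/types.h>
#include <pcap/pcap.h>
#include <stdio.h>

static unsigned long npkts;

static void
handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
	(void)user; (void)bytes;
	if (++npkts % 1000000 == 0)
		printf("%lu packets, last caplen %u\n", npkts, h->caplen);
}

int
main(int argc, char **argv)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	const char *ifname = (argc > 1) ? argv[1] : "ix0";
	pcap_t *p;

	p = pcap_open_live(ifname, 64 /* snaplen */, 1 /* promisc */,
	    100 /* read timeout, ms */, errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_open_live: %s\n", errbuf);
		return (1);
	}
	if (pcap_loop(p, -1, handler, NULL) == -1)
		fprintf(stderr, "pcap_loop: %s\n", pcap_geterr(p));
	pcap_close(p);
	return (0);
}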
On Tue, May 11, 2010 at 9:51 AM, Andrew Gallatin wrote:
> Murat Balaban [mu...@enderunix.org] wrote:
>>
>> Much of the FreeBSD networking stack has been made parallel in order to
>> cope with high packet rates at 10 Gig/sec operation.
>>
>> I've seen good numbers (near 10 Gig) in my tests involvin
Murat Balaban [mu...@enderunix.org] wrote:
>
> Much of the FreeBSD networking stack has been made parallel in order to
> cope with high packet rates at 10 Gig/sec operation.
>
> I've seen good numbers (near 10 Gig) in my tests involving TCP/UDP
> send/receive. (latest Intel driver).
>
> As far
--- On Sun, 5/9/10, Jack Vogel wrote:
> From: Jack Vogel
> Subject: Re: Intel 10Gb
> To: "Barney Cordoba"
> Cc: "Murat Balaban" , freebsd-net@freebsd.org,
> freebsd-performa...@freebsd.org, "grarpamp" , "Vincent
> Hoffman"
> Dat
On 05/09/10 14:43, Barney Cordoba wrote:
> Blah, Blah, Blah. Let's see some real numbers on real networks under
> real loads. Until then, you've got nothing.
But that's also blah blah on your part, Barney.
As long as the peak throughput under load in testing is above the median
range for the
On Sun, May 9, 2010 at 6:43 AM, Barney Cordoba wrote:
>
>
> --- On Sat, 5/8/10, Murat Balaban wrote:
>
> > From: Murat Balaban
> > Subject: Re: Intel 10Gb
> > To: "Vincent Hoffman"
> > Cc: freebsd-net@freebsd.org, freebsd-performa...@freebsd.org,
--- On Sat, 5/8/10, Murat Balaban wrote:
> From: Murat Balaban
> Subject: Re: Intel 10Gb
> To: "Vincent Hoffman"
> Cc: freebsd-net@freebsd.org, freebsd-performa...@freebsd.org, "grarpamp"
>
> Date: Saturday, May 8, 2010, 8:59 AM
>
> Much o
Much of the FreeBSD networking stack has been made parallel in order to
cope with high packet rates at 10 Gig/sec operation.
I've seen good numbers (near 10 Gig) in my tests involving TCP/UDP
send/receive. (latest Intel driver).
As far as BPF is concerned, the above statement does not hold true,
si
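For context on the BPF caveat: with the standard path every matching packet is timestamped and copied into the kernel BPF buffer and then read(2) into userspace, which is why capture rates fall well short of the TCP/UDP numbers above. A bare-bones /dev/bpf reader showing that path (interface name "ix0" is just an example; error handling trimmed):

/*
 * Minimal /dev/bpf reader.  Each read(2) returns a buffer of bpf_hdr-
 * prefixed packets that the kernel has already copied once; this per-
 * packet copy plus the syscall path is the overhead being discussed.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct ifreq ifr;
	u_int blen, imm = 1;
	char *buf, *p;
	ssize_t n;
	int fd;

	fd = open("/dev/bpf", O_RDONLY);	/* cloning bpf device */
	if (fd < 0) {
		perror("open /dev/bpf");
		return (1);
	}
	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, "ix0", sizeof(ifr.ifr_name));
	if (ioctl(fd, BIOCSETIF, &ifr) < 0 ||	/* attach to the interface */
	    ioctl(fd, BIOCIMMEDIATE, &imm) < 0 ||
	    ioctl(fd, BIOCGBLEN, &blen) < 0) {
		perror("ioctl");
		return (1);
	}
	buf = malloc(blen);
	while ((n = read(fd, buf, blen)) > 0) {	/* one syscall per buffer */
		for (p = buf; p < buf + n;) {
			struct bpf_hdr *bh = (struct bpf_hdr *)p;
			/* packet bytes live at p + bh->bh_hdrlen (bh_caplen of them) */
			p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
		}
	}
	free(buf);
	return (0);
}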
Looks a little like
http://lists.freebsd.org/pipermail/svn-src-all/2010-May/023679.html
but for Intel. Cool.
Vince
On 07/05/2010 23:01, grarpamp wrote:
> Just wondering in general these days how close FreeBSD is to
> full 10Gb rates at various packet sizes from minimum ethernet
> frame to max jumb
Just wondering in general these days how close FreeBSD is to
full 10Gb rates at various packet sizes from minimum Ethernet
frame to max jumbo 65k++. For things like BPF, ipfw/pf, routing,
switching, etc.
http://www.ntop.org/blog/?p=86
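As a yardstick for "full 10Gb rates": counting the fixed 20 bytes of preamble/SFD/inter-frame gap per frame, 10GbE line rate works out to roughly 14.88 Mpps at minimum-size 64-byte frames (the 14.8Mpps figure mentioned elsewhere in the thread), about 0.81 Mpps at 1518 bytes, and about 0.14 Mpps at 9018-byte jumbos. A small calculation, assuming standard Ethernet framing:

/*
 * Theoretical 10GbE packet rates at a few frame sizes.  Assumes standard
 * Ethernet framing: each frame costs its own length (including the 4-byte
 * FCS) plus 20 bytes of preamble, SFD and inter-frame gap on the wire.
 */
#include <stdio.h>

int
main(void)
{
	const double linerate = 10e9;			/* bits per second */
	const int sizes[] = { 64, 1518, 9018 };		/* frame sizes, bytes */
	const int nsizes = sizeof(sizes) / sizeof(sizes[0]);

	for (int i = 0; i < nsizes; i++) {
		double pps = linerate / ((sizes[i] + 20) * 8.0);
		/* payload excludes the 14-byte header and 4-byte FCS */
		double payload_gbps = pps * (sizes[i] - 18) * 8.0 / 1e9;
		printf("%5d-byte frames: %6.2f Mpps, %5.2f Gbit/s payload\n",
		    sizes[i], pps / 1e6, payload_gbps);
	}
	return (0);
}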