Hello,
I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI) which
performs poorly in FreeBSD compared to the same card in Linux. I've tried this
card in two different FreeBSD boxes and for whatever reason I get poor transmit
performance. I've done all of the tweaking [...]
On Thu, Apr 28, 2011 at 2:29 AM, Adam Stylinski wrote:
> [...]
On 28.04.2011 14:29, Adam Stylinski wrote:
> [...]
-- Forwarded message --
From: Adam Stylinski
Date: Thu, Apr 28, 2011 at 8:38 AM
Subject: Re: em0 performance subpar
To: Eugene Grosbein
On Thu, Apr 28, 2011 at 05:51:58PM +0700, Eugene Grosbein wrote:
> On 28.04.2011 14:29, Adam Stylinski wrote:
> > [...]
On 4/28/2011 3:29 AM, Adam Stylinski wrote:
> [...]
On Thu, Apr 28, 2011 at 08:47:38AM -0400, Pierre Lamy wrote:
> Try using netblast on FreeBSD instead of iperf; there have been a lot of
> discussions about this on this list.
>
> Is it possible you're maxing out the system's PCI-xxx bus? Did you tune
> up the system buffers? Data doesn't just [...]
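"Tune up the system buffers" on FreeBSD usually means the socket-buffer sysctls.
A minimal sketch, with values that are purely illustrative rather than taken from
this thread:

sysctl kern.ipc.maxsockbuf=2097152        # upper bound on any single socket buffer
sysctl net.inet.tcp.sendbuf_max=1048576   # ceiling for TCP send-buffer autoscaling
sysctl net.inet.tcp.recvbuf_max=1048576   # ceiling for TCP receive-buffer autoscaling
sysctl net.inet.tcp.sendspace=65536       # default TCP send buffer
sysctl net.inet.tcp.recvspace=65536       # default TCP receive buffer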
On Thu, Apr 28, 2011 at 09:04:24AM -0400, Mike Tancsa wrote:
> On 4/28/2011 3:29 AM, Adam Stylinski wrote:
> > [...]
Running em's here we regularly see them hitting pretty much line rate,
although there are a lot of different em's.
Here we have the following under 8.0+:
em0@pci0:6:0:0: class=0x02 card=0x15d9 chip=0x10968086 rev=0x01 hdr=0x00
vendor = 'Intel Corporation'
device = 'Intel PRO/10 [...]
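The listing above is pciconf output; to pull the same block for another adapter,
something along these lines should do (the grep pattern simply assumes the
interface is named em0):

pciconf -lvc | grep -A 4 '^em0'   # PCI listing with vendor/device strings, em0 block only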
On Thu, Apr 28, 2011 at 02:52:59PM +0100, Steven Hartland wrote:
> [...]
On Thu, Apr 28, 2011 at 09:38:37AM -0400, Mike Tancsa wrote:
> On 4/28/2011 9:29 AM, Adam Stylinski wrote:
> > pciconf -lvc:
> >
> > em0@pci0:7:5:0: class=0x02 card=0x13768086 chip=0x107c8086 rev=0x05 hdr=0x00
> > vendor = 'Intel Corporation'
> > device = 'Gigabit Ethernet C [...]
* What's your traffic like? e.g. http, large tcp files, tiny udp etc...
* Is it all on a local switch, if so what switch?
* Is flow control enabled?
* Are you seeing high interrupts?
* Are you disk bound?
* Are you memory bound?
* Are you cpu bound?
Some basic settings we have here:-
net.inet.tcp [...]
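For what it's worth, those questions can all be answered with stock tools; a rough
sketch (em0 is simply the interface under discussion, none of this is output from
the thread):

vmstat -i               # per-device interrupt rates - is em0 unusually hot?
top -SH                 # system processes and threads - CPU bound? interrupt/taskqueue threads busy?
netstat -w 1 -I em0     # per-second packet, byte and error counters on the interface
gstat                   # disk saturation, if the traffic is coming from or going to disk
ifconfig em0            # negotiated media/duplex (and MTU) as FreeBSD sees it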
On 4/28/2011 10:15 AM, Adam Stylinski wrote:
>
> em0: port 0xe800-0xe83f
> mem 0xfe9e-0xfe9f,0xfe9c-0xfe9d irq 20 at device 5.0 on pci7
> em0: [FILTER]
I am not sure the newer driver will help performance-wise. It might fix
that bug you saw at least. Jack from Intel might be [...]
On Thu, Apr 28, 2011 at 03:25:53PM +0100, Steven Hartland wrote:
> [...]
On Thu, Apr 28, 2011 at 10:30:31AM -0400, Mike Tancsa wrote:
> [...]
You said you're testing with iperf; what settings are you using?
Flow control is not flowtable, no, which could still result in
a switch "issue" if Linux and FreeBSD are setting different
values by default; similarly with duplex / speed, I would recommend
autoneg if you're not already using it.
Have you [...]
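To compare the two ends on speed/duplex, something like the following is the usual
check; the dev.em.0.fc knob only exists on driver versions that expose it, so treat
that line as an assumption:

ifconfig em0 | grep media       # expect "autoselect (1000baseT <full-duplex>)" if autoneg worked
ifconfig em0 media autoselect   # go back to autonegotiation rather than a forced speed/duplex
sysctl dev.em.0.fc              # flow-control setting, where the driver provides this sysctl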
On 4/28/2011 11:01 AM, Adam Stylinski wrote:
>
> ./netblast 192.168.0.121 5001 32768 30
>
> start: 1304002549.184689025
> finish:1304002579.187555311
> send calls:2163162
> send errors: 2095950
> approx send rate: 2240
> approx error rate: 0
>
> ? This outp [...]
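Reading that output, assuming netblast's "approx send rate" counts successful sends
per second: 2163162 send calls minus 2095950 errors leaves 67212 packets over the
~30 s run, i.e. about 2240 packets/s. At the 32768-byte payload used above that is
roughly 2240 x 32768 x 8, about 587 Mbit/s of UDP payload, well short of what a
gigabit link should manage, and the huge error count suggests the outbound queue
was filling much faster than the card could drain it.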
-- Forwarded message --
From: Adam Stylinski
Date: Thu, Apr 28, 2011 at 11:21 AM
Subject: Re: em0 performance subpar
To: Steven Hartland
On Thu, Apr 28, 2011 at 04:08:31PM +0100, Steven Hartland wrote:
> [...]
On Thu, Apr 28, 2011 at 11:21:32AM -0400, Mike Tancsa wrote:
> [...]
Just in case anyone did not notice, this adapter is actually using the
legacy subdevice, i.e. lem. There has been little focus on that code;
things that are not even PCI Express are becoming pretty elderly. Let me
look this thread over in a bit more detail after I get into the office
in a bit...
Jack
On Thu, Apr 28, 2011 at 11:25:11AM -0500, Pierre Lamy wrote:
> Someone mentioned on freebsd-current:
>
> > With the 7.2.2 driver you also will use different mbuf pools depending on
> > the MTU you are using. If you use jumbo frames it will use 4K clusters,
> > if you go to 9K jumbos it will use 9K [...]
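If jumbo frames are in play, the cluster pools being described can be checked
directly; a minimal look, with no particular values assumed:

netstat -m                 # mbuf and 2k/4k/9k/16k cluster usage, plus any denied requests
ifconfig em0 | grep mtu    # confirm whether an MTU above 1500 (jumbo frames) is actually set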
On Thu, Apr 28, 2011 at 04:29:42PM +0100, Steven Hartland wrote:
> Try using a large buffer size on iperf and check the flow control options on
> the switch.
>
> - Original Message -
> From: "Adam Stylinski"
> Just using the default settings with iperf. Netblast is giving me similar
> [...]
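With iperf's defaults the socket buffer is small; bumping it on both ends is the
usual first step. A sketch along those lines, the target address taken from the
netblast run earlier and everything else just illustrative:

iperf -s -w 256K                             # receiver with a 256 kB socket buffer
iperf -c 192.168.0.121 -w 256K -t 30 -i 5    # sender: 256 kB buffer, 30 s run, report every 5 s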
On 4/28/2011 11:35 AM, Adam Stylinski wrote:
>
> And the rate output of netblast (using as suggested the parameters above):
> 119091
>
> This is about 454 Mbps. Still way slower than it ought to be.
>
Yes, it should do way better than that. I just tried on a couple of
8.2R boxes, E5320 @ 1.[...]
On Thu, Apr 28, 2011 at 11:51:24AM -0400, Adam Stylinski wrote:
> [...]
Adam,
The TX ring for the legacy driver is small right now compared to em; try
this experiment: edit if_lem.c, search for "lem_txd" and change
EM_DEFAULT_TXD to 1024, see what that does, then 2048.
My real strategy with the legacy code was that it should be stable, meaning
not getting a lot of [...]
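A sketch of how that experiment usually looks on a stock source tree; the paths and
the module rebuild are assumptions, only the 1024/2048 descriptor values come from
Jack's suggestion:

grep -En 'lem_txd|EM_DEFAULT_TXD' /usr/src/sys/dev/e1000/if_lem.[ch]
# edit the EM_DEFAULT_TXD definition to 1024 (later 2048), then rebuild the driver:
cd /usr/src/sys/modules/em && make clean && make && make install
# reload it (only if em0 is not the link you are logged in over), or rebuild the
# kernel if the driver is compiled in:
kldunload if_em && kldload if_em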
On Thu, Apr 28, 2011 at 09:52:14AM -0700, Jack Vogel wrote:
> [...]
Hello, Freebsd-net.
Do queue/sched masks work with IPv6 addresses? I can not find any
examples for this; all examples are with 32-bit masks only...
--
// Black Lion AKA Lev Serebryakov
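For context, the 32-bit examples being referred to are the classic dummynet ones,
roughly of this shape (illustrative only; whether an equivalent mask spelling is
accepted for IPv6 addresses is exactly the open question):

ipfw pipe 1 config bw 10Mbit/s
ipfw queue 1 config pipe 1 weight 10 mask dst-ip 0x000000ff   # one dynamic queue per value of the low octet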
My validation engineer set things up on an 8.2-REL system, testing the
equivalent of HEAD, and he reports performance is fine. This is without
any tweaks from what's checked in.
Increasing the descriptors to 4K is way overkill and might actually cause
problems; go back to the default.
He has a Linux [...]
On Thu, Apr 28, 2011 at 02:22:29PM -0700, Jack Vogel wrote:
> [...]
On Thu, 28 Apr 2011 03:29:46 -0400, Adam Stylinski wrote:
[...]
Synopsis: [rum] [panic] Enabling rum interface causes panic
State-Changed-From-To: open->closed
State-Changed-By: kevlo
State-Changed-When: Fri Apr 29 06:28:36 UTC 2011
State-Changed-Why:
Committed, thanks!
http://www.freebsd.org/cgi/query-pr.cgi?pr=144642
The following reply was made to PR kern/144642; it has been noted by GNATS.
From: dfil...@freebsd.org (dfilter service)
To: bug-follo...@freebsd.org
Cc:
Subject: Re: kern/144642: commit references a PR
Date: Fri, 29 Apr 2011 06:28:45 +0000 (UTC)
Author: kevlo
Date: Fri Apr 29 06:28:29 2011
[...]