On Thu, Apr 28, 2011 at 09:41:01PM -0700, Jack Vogel wrote:
> We rarely test 32-bit any more; the only time we would is because of a
> problem, so 99% is amd64, so that's not the problem.
>
> Running netperf without any special arguments, using the TCP_STREAM and
> TCP_MAERTS tests, what numbers are y
On Thu, 28 Apr 2011 03:29:46 -0400, Adam Stylinski wrote:
Hello,
I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
which performs poorly in FreeBSD compared to the same card in Linux. I've
tried this card in two different FreeBSD boxes and for whatever reason I
get poor
On Thu, Apr 28, 2011 at 02:22:29PM -0700, Jack Vogel wrote:
> My validation engineer set things up on an 8.2 REL system, testing the
> equivalent of HEAD, and he reports performance is fine. This is without
> any tweaks from what's checked in.
>
> Increasing the descriptors to 4K is way overki
My validation engineer set things up on an 8.2 REL system, testing the
equivalent of HEAD, and he reports performance is fine. This is without any
tweaks from what's checked in.
Increasing the descriptors to 4K is way overkill and might actually cause
problems; go back to default.
He has a Linux
On Thu, Apr 28, 2011 at 09:52:14AM -0700, Jack Vogel wrote:
> Adam,
>
> The TX ring for the legacy driver is small right now compared to em. Try
> this experiment: edit if_lem.c, search for "lem_txd" and change
> EM_DEFAULT_TXD to 1024; see what that does, then 2048.
>
> My real strategy with
Adam,
The TX ring for the legacy driver is small right now compared to em. Try
this experiment: edit if_lem.c, search for "lem_txd" and change
EM_DEFAULT_TXD to 1024; see what that does, then 2048.
My real strategy with the legacy code was that it should be stable, meaning
not getting a lot of chang
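For anyone following along, Jack's experiment amounts to something like the
following (a sketch: the source path is from a stock 8.x tree, and module
name/paths may differ on your release; unload the NIC driver from the
console, not over the network):

```shell
# Locate the legacy driver's TX descriptor default in the source tree:
grep -n "lem_txd\|EM_DEFAULT_TXD" /usr/src/sys/dev/e1000/if_lem.*
# Edit EM_DEFAULT_TXD to 1024 (then 2048 in a second run), then rebuild
# and reload the em module (which contains the lem code on 8.x):
cd /usr/src/sys/modules/em && make clean && make && make install
kldunload if_em && kldload if_em    # run this from the local console
```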
On Thu, Apr 28, 2011 at 11:51:24AM -0400, Adam Stylinski wrote:
> On Thu, Apr 28, 2011 at 11:25:11AM -0500, Pierre Lamy wrote:
> > Someone mentioned on freebsd-current:
> >
> > With the 7.2.2 driver you also will use different mbuf pools depending
> > on the MTU you are using. If you use jumbo f
On 4/28/2011 11:35 AM, Adam Stylinski wrote:
>
> And the rate output of netblast (using as suggested the parameters above):
> 119091
>
> This is about 454 Mbps. Still way slower than it ought to be.
>
Yes, it should do way better than that. I just tried on a couple of 8.2R
boxes, E5320 @ 1.
On Thu, Apr 28, 2011 at 04:29:42PM +0100, Steven Hartland wrote:
> Try using a large buffer size on iperf and check the flow control options on
> the switch.
>
> - Original Message -
> From: "Adam Stylinski"
> Just using the default settings with iperf. Netblast is giving me similar
>
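For reference, "a large buffer size on iperf" translates to something like
this (host and port taken from elsewhere in the thread; the flags are
standard iperf 2 options, shown as a sketch rather than recommended values):

```shell
# iperf (v2) with an explicit socket buffer (-w) and read/write size (-l)
# instead of the defaults; -t runs the test for 30 seconds:
iperf -c 192.168.0.121 -p 5001 -w 256k -l 64k -t 30
```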
On Thu, Apr 28, 2011 at 11:25:11AM -0500, Pierre Lamy wrote:
> Someone mentioned on freebsd-current:
>
> With the 7.2.2 driver you also will use different mbuf pools depending on
> the MTU you are using. If you use jumbo frames it will use 4K clusters,
> and if you go to 9K jumbos it will use 9K m
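The per-size cluster pools Pierre mentions can be inspected directly; a
quick check looks like this (sysctl names as they appear on 8.x trees, shown
as a sketch):

```shell
# Show mbuf/cluster usage; non-zero "denied" counters mean a pool ran dry:
netstat -m
# Limits for the standard, page-size (4K) and 9K jumbo cluster pools:
sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop kern.ipc.nmbjumbo9
```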
Just in case anyone did not notice, this adapter is actually using the
legacy subdevice, i.e. lem; there has been little focus on that code, and
things that are not even PCI Express are becoming pretty elderly. Let me
look this thread over in a bit more detail after I get into the office in
a bit...
Jack
On Thu, Apr 28, 2011 at 11:21:32AM -0400, Mike Tancsa wrote:
> On 4/28/2011 11:01 AM, Adam Stylinski wrote:
> >
> > ./netblast 192.168.0.121 5001 32768 30
> >
> > start: 1304002549.184689025
> > finish:1304002579.187555311
> > send calls:2163162
> > send errors:
-- Forwarded message --
From: Adam Stylinski
Date: Thu, Apr 28, 2011 at 11:21 AM
Subject: Re: em0 performance subpar
To: Steven Hartland
On Thu, Apr 28, 2011 at 04:08:31PM +0100, Steven Hartland wrote:
> You said your testing with iperf, what settings are you using?
>
On 4/28/2011 11:01 AM, Adam Stylinski wrote:
>
> ./netblast 192.168.0.121 5001 32768 30
>
> start: 1304002549.184689025
> finish:1304002579.187555311
> send calls:2163162
> send errors: 2095950
> approx send rate: 2240
> approx error rate: 0
>
> ? This outp
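Note that netblast's "approx send rate" is successful writes per second
(send calls minus send errors, over the run time), so application-level
throughput is rate x write size x 8. A quick conversion using the numbers
from the run above:

```shell
# Convert netblast's send rate (writes/s) and write size (bytes) to Mbit/s.
rate=2240      # "approx send rate" from the run above
size=32768     # payload size given on the netblast command line
awk -v r="$rate" -v s="$size" 'BEGIN { printf "%.1f Mbit/s\n", r*s*8/1e6 }'
# -> 587.2 Mbit/s
```

That figure is application-layer goodput; the wire carries additional
UDP/IP/Ethernet overhead on top of it.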
Have you tried with tso, rxcsum, txcsum & lro disabled?
- Original Message -
From: "Adam Stylinski"
To: "Steven Hartland"
Cc:
Sent: Thursday, April 28, 2011 3:45 PM
Subject: Re: em0 performance subpar
I was using the default value for maxsockbuf, doesn
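Steven's offload question translates to something like this (a sketch; em0
is the interface from the thread, and the option names are the standard
ifconfig ones; not every offload exists on every chip):

```shell
# Turn off TSO, RX/TX checksum offload and LRO on em0 to rule out
# offload-related throughput problems (the same names without '-' re-enable):
ifconfig em0 -tso -rxcsum -txcsum -lro
ifconfig em0    # the "options=" line should no longer list them
```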
On Thu, Apr 28, 2011 at 10:30:31AM -0400, Mike Tancsa wrote:
> On 4/28/2011 10:15 AM, Adam Stylinski wrote:
>
> >
> > em0: port 0xe800-0xe83f
> > mem 0xfe9e-0xfe9f,0xfe9c-0xfe9d irq 20 at device 5.0 on pci7
> > em0: [FILTER]
>
> I am not sure the newer driver will help performa
On Thu, Apr 28, 2011 at 03:25:53PM +0100, Steven Hartland wrote:
> * What's your traffic like? e.g. http, large tcp files, tiny udp etc...
> * Is it all on a local switch, if so what switch?
> * Is flow control enabled?
> * Are you seeing high interrupts?
> * Are you disk bound?
> * Are you memory
On 4/28/2011 10:15 AM, Adam Stylinski wrote:
>
> em0: port 0xe800-0xe83f
> mem 0xfe9e-0xfe9f,0xfe9c-0xfe9d irq 20 at device 5.0 on pci7
> em0: [FILTER]
I am not sure the newer driver will help performance-wise. It might fix
that bug you saw at least. Jack from Intel might be a
* What's your traffic like? e.g. http, large tcp files, tiny udp etc...
* Is it all on a local switch, if so what switch?
* Is flow control enabled?
* Are you seeing high interrupts?
* Are you disk bound?
* Are you memory bound?
* Are you cpu bound?
Some basic settings we have here:
net.inet.tcp
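A few stock FreeBSD commands answer most of the checklist above (a sketch;
which per-device sysctl counters exist depends on the driver and release):

```shell
vmstat -i                   # high interrupt rates?
ifconfig em0 | grep media   # negotiated speed/duplex and link state
netstat -m                  # mbuf/cluster pressure
top -SH                     # CPU bound? watch the em interrupt/taskq threads
sysctl dev.em.0 2>/dev/null | head   # per-device counters, if exposed
```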
On Thu, Apr 28, 2011 at 09:38:37AM -0400, Mike Tancsa wrote:
> On 4/28/2011 9:29 AM, Adam Stylinski wrote:
> > pciconf -lv:
> >
> > em0@pci0:7:5:0: class=0x02 card=0x13768086 chip=0x107c8086 rev=0x05 hdr=0x00
> >     vendor = 'Intel Corporation'
> >     device = 'Gigabit Ethernet C
Regards
> Steve
>
> - Original Message -
> From: "Mike Tancsa"
> To: "Adam Stylinski"
> Cc:
> Sent: Thursday, April 28, 2011 2:04 PM
> Subject: Re: em0 performance subpar
>
>
> > On 4/28/2011 3:29 AM, Adam Stylinski wrote:
device = 'Intel PRO/1000 EB (Intel PRO/1000 EB)'
class = network
subclass = ethernet
You don't say which OS version you're running?
Regards
Steve
- Original Message -
From: "Mike Tancsa"
To: "Adam Stylinski"
Cc:
Sent: Thursday, Apr
On Thu, Apr 28, 2011 at 09:04:24AM -0400, Mike Tancsa wrote:
> On 4/28/2011 3:29 AM, Adam Stylinski wrote:
> > Hello,
> >
> > I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
> > which performs poorly in FreeBSD compared to the same card in Linux.
> > I've tried this card
On Thu, Apr 28, 2011 at 08:47:38AM -0400, Pierre Lamy wrote:
> Try using netblast on FreeBSD instead of iperf, there have been a lot of
> discussions about this on this list.
>
> Is it possible you're maxing out the system's PCI-xxx bus? Did you tune
> up the system buffers? Data doesn't just ge
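For "tune up the system buffers", the knobs usually meant on 8.x are the
socket-buffer ceilings (the values below are illustrative only, not a
recommendation; the auto-tuning defaults may already be adequate):

```shell
# Raise the socket-buffer ceilings for high-throughput TCP (example values):
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
# TCP buffer auto-tuning (enabled by default on 8.x):
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
```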
On 4/28/2011 3:29 AM, Adam Stylinski wrote:
> Hello,
>
> I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
> which performs poorly in FreeBSD compared to the same card in Linux. I've
> tried this card in two different FreeBSD boxes and for whatever reason I
> get poor tra
-- Forwarded message --
From: Adam Stylinski
Date: Thu, Apr 28, 2011 at 8:38 AM
Subject: Re: em0 performance subpar
To: Eugene Grosbein
On Thu, Apr 28, 2011 at 05:51:58PM +0700, Eugene Grosbein wrote:
> On 28.04.2011 14:29, Adam Stylinski wrote:
> > Hello,
> >
On 28.04.2011 14:29, Adam Stylinski wrote:
> Hello,
>
> I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
> which performs poorly in FreeBSD compared to the same card in Linux. I've
> tried this card in two different FreeBSD boxes and for whatever reason I
> get poor tran
On Thu, Apr 28, 2011 at 2:29 AM, Adam Stylinski wrote:
> Hello,
>
> I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
> which performs poorly in FreeBSD compared to the same card in Linux. I've
> tried this card in two different FreeBSD boxes and for whatever reason I get
>
Hello,
I have an Intel gigabit network adapter (the 1000 GT w/chipset 82541PI)
which performs poorly in FreeBSD compared to the same card in Linux. I've
tried this card in two different FreeBSD boxes and for whatever reason I
get poor transmit performance. I've done all of the tweaking specif