Polling should not produce any improvement over interrupts for EM0.
The EM0 card will aggregate 8-14+ packets per interrupt, which works
out to only around 8000 interrupts/sec. I've got a ton of these
cards installed.
# mount_nfs -a 4 dhcp61:/home /mnt
# dd if=/mnt/x of=/d
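Taken together, the figures above imply a packet budget: at roughly 8 packets coalesced per interrupt and ~8000 interrupts/sec, the card sinks on the order of 64,000 packets/sec without polling. A quick arithmetic check (figures taken from the message above, not measured):

```shell
# Packets/sec implied by the coalescing figures quoted above:
# ~8 packets per interrupt at ~8000 interrupts/sec.
awk 'BEGIN { printf "%d pkts/sec\n", 8 * 8000 }'
# prints "64000 pkts/sec"
```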
"David G. Lawrence" wrote:
>
> > >>tests. With the re driver, no change except placing a 100BT setup with
> > >>no packet loss to a gigE setup (both linksys switches) will cause
> > >>serious packet loss at 20Mbps data rates. I have discovered the only
> > >way to get good performance with no packet loss was to
> > >ifnet and netisr queues. You could also try setting net.isr.enable=1 to
> > >enable direct dispatch, which in the in-bound direction would reduce the
> > >number of context switches and queueing. It sounds like the device driver
> > >has a limit of 256 receive and transmit descriptor
> >>tests. With the re driver, no change except placing a 100BT setup with
> >>no packet loss to a gigE setup (both linksys switches) will cause
> >>serious packet loss at 20Mbps data rates. I have discovered the only
> >>way to get good performance with no packet loss was to
> >>
> >>1) Remove i
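The direct-dispatch suggestion quoted above boils down to one sysctl; a minimal sketch for a FreeBSD 5.x box (the net.isr.enable name is as quoted in the message; verify it exists on your kernel before relying on it):

```shell
# Enable direct dispatch so inbound packets are processed in the
# receiving context instead of being queued to the netisr thread,
# cutting context switches per packet:
sysctl net.isr.enable=1

# Review the current netisr settings:
sysctl net.isr
```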
Robert Watson wrote:
On Sun, 21 Nov 2004, Sean McNeil wrote:
I have to disagree. Packet loss is likely according to some of my
tests. With the re driver, no change except placing a 100BT setup with
no packet loss to a gigE setup (both linksys switches) will cause
serious packet loss at 20Mbps data rates.
> I changed cables and couldn't reproduce those bad results, so I changed cables
> back but also cannot reproduce them; especially the ggate write, formerly
> at 2.6 MB/s, now performs at 15 MB/s, but I haven't done any polling tests
> anymore, just interrupt driven, since Matt explained that em do
In message:
"Daniel Eriksson" <[EMAIL PROTECTED]> writes:
: Finally, my question. What would you recommend:
: 1) Run with ACPI disabled and debug.mpsafenet=1 and hope that the mix of
: giant-safe and giant-locked (em and ahc) doesn't trigger any bugs. This is
: what I currently do.
:
On Friday, 19 November 2004 13:56, Robert Watson wrote:
> On Fri, 19 Nov 2004, Emanuel Strobl wrote:
> > On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> > > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > > > I really love 5.3 in many ways but here're some unbelievable transfer
[...]
On Fri, 19 Nov 2004, Emanuel Strobl wrote:
> On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > > I really love 5.3 in many ways but here're some unbelievable transfer
>
> First, thanks a lot to all of you paying attention to my probl
On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > I really love 5.3 in many ways but here're some unbelievable transfer
First, thanks a lot to all of you paying attention to my problem again.
I'll use this as a cumulative answer to the m
> Hi, Jeremie, how is this?
> To disable Interrupt Moderation, sysctl hw.em?.int_throttle_valve=0.
Great, I would have called it "int_throttle_ceil", but that's a detail
and my opinion is totally subjective.
> However, because this patch is just made now, it is not fully tested.
I'll give it
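Assuming the patch discussed above is applied, disabling Interrupt Moderation on the first em instance would look like this (the int_throttle_valve name comes from the patch author's message; unit number 0 is an assumption):

```shell
# Turn off Interrupt Moderation on em0, per the patch quoted above.
# A value of 0 disables throttling entirely; nonzero values set the cap.
sysctl hw.em0.int_throttle_valve=0
```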
> if you suppose your computer has sufficient performance, please try to
> disable or adjust parameters of Interrupt Moderation of em.
Nice! It would be even better if there were a boot-time sysctl to
configure the behaviour of this feature, or something like the ifconfig
link0 option of the fxp(4) d
On Thu, 18 Nov 2004, Daniel Eriksson wrote:
> I have a Tyan Tiger MPX board (dual AthlonMP) that has two 64bit PCI
> slots. I have an Adaptec 29160 and a dual port Intel Pro/1000 MT
> plugged into those slots.
>
> As can be seen from the vmstat -i output below, em1 shares ithread with
> ahc0.
M. Warner Losh wrote:
> Also, make sure that you aren't sharing interrupts between
> GIANT-LOCKED and non-giant-locked cards. This might be exposing bugs
> in the network layer that debug.mpsafenet=0 might correct. Just
> noticed that our setup here has that setup, so I'll be looking into
> that
In message: <[EMAIL PROTECTED]>
Robert Watson <[EMAIL PROTECTED]> writes:
: (1) I'd first off check that there wasn't a serious interrupt problem on
: the box, which is often triggered by ACPI problems. Get the box to be
: as idle as possible, and then use vmstat -i or stat -vm
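Robert's first check above, spotting an interrupt storm on an idle box, amounts to sampling vmstat -i twice and dividing the count delta by the interval. A sketch with hypothetical counter values (the two counts and the 5-second interval are made up for illustration):

```shell
# Per-source interrupt rate = (count_after - count_before) / interval.
# Hypothetical counters sampled 5 s apart from vmstat -i output:
before=123400; after=163400; interval=5
awk -v a="$after" -v b="$before" -v t="$interval" \
    'BEGIN { printf "%d interrupts/sec\n", (a - b) / t }'
# prints "8000 interrupts/sec"
```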
Andreas Braukmann said:
> --On Wednesday, 17 November 2004 20:48 -0500, Mike Jakubik
> <[EMAIL PROTECTED]> wrote:
>
>> I have two PCs connected together, using the em card. One is FreeBSD 6
>> from Fri Nov 5 , the other is Windows XP. I am using the default mtu of
>> 1500, no polling, and i get
On Thu, 18 Nov 2004, Wilko Bulte wrote:
> On Thu, Nov 18, 2004 at 12:27:44PM +, Robert Watson wrote..
> >
> > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> >
> > > I really love 5.3 in many ways but here're some unbelievable transfer
> > > rates, after I went out and bought a pair of Intel G
On Thu, Nov 18, 2004 at 12:27:44PM +, Robert Watson wrote..
>
> On Wed, 17 Nov 2004, Emanuel Strobl wrote:
>
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet
> > Cards to solve my performance prob
On Thu, 18 Nov 2004, Pawel Jakub Dawidek wrote:
> On Wed, Nov 17, 2004 at 11:57:41PM +0100, Emanuel Strobl wrote:
> +> Dear best guys,
> +>
> +> I really love 5.3 in many ways but here're some unbelievable transfer rates,
> +> after I went out and bought a pair of Intel GigaBit Ethernet Card
On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> I really love 5.3 in many ways but here're some unbelievable transfer
> rates, after I went out and bought a pair of Intel GigaBit Ethernet
> Cards to solve my performance problem (*laugh*):
I think the first thing you want to do is to try and determ
On Wed, Nov 17, 2004 at 11:57:41PM +0100, Emanuel Strobl wrote:
+> Dear best guys,
+>
+> I really love 5.3 in many ways but here're some unbelievable transfer rates,
+> after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
+> my performance problem (*laugh*):
[...]
I done
--On Wednesday, 17 November 2004 20:48 -0500, Mike Jakubik <[EMAIL PROTECTED]> wrote:
I have two PCs connected together, using the em card. One is FreeBSD 6
from Fri Nov 5 , the other is Windows XP. I am using the default mtu of
1500, no polling, and I get ~ 21MB/s transfer rates via ftp. Im s
Emanuel Strobl said:
~ 15MB/s
> .and with 1m blocksize:
> test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
> ^C61+0 records in
> 60+0 records out
> 62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
> ->
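The dd figures quoted above are internally consistent; recomputing the rate from the byte count and elapsed time (numbers taken verbatim from the message):

```shell
# Recompute the transfer rate dd reported: bytes / seconds.
bytes=62914560
secs=4.608726
awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", b / s / 1000000 }'
# prints "13.7 MB/s"
```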
On Thursday, 18 November 2004 01:01, Chuck Swiger wrote:
> Emanuel Strobl wrote:
> [ ... ]
>
> > Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
> > Desktop adapter MT) connected directly without a switch/hub
>
> If filesharing via NFS is your primary goal, it's reason
Ping only tests latency, *not* throughput, so it is not really a good test.
- aW
On Wed, Nov 17, 2004 at 07:01:24PM -0500, Chuck Swiger wrote:
Emanuel Strobl wrote:
[ ... ]
>Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
>D
Emanuel Strobl wrote:
[ ... ]
Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
Desktop adapter MT) connected directly without a switch/hub
If filesharing via NFS is your primary goal, it's reasonable to test that,
however it would be easier to make sense of your results b
On Thursday, 18 November 2004 00:33, Scott Long wrote:
> Emanuel Strobl wrote:
> > Dear best guys,
> >
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet Cards
> > to solve my performance problem (*la
Emanuel Strobl wrote:
Dear best guys,
I really love 5.3 in many ways but here're some unbelievable transfer rates,
after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel GigaBit Ethern
On Thursday, 18 November 2004 00:17, Sean McNeil wrote:
> On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
> > Dear best guys,
> >
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet Cards
> >
On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
> Dear best guys,
>
> I really love 5.3 in many ways but here're some unbelievable transfer rates,
> after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
> my performance problem (*laugh*):
>
> (In short, see *** be