Re: msk/Yukon issues since 9.0-REL

2012-03-24 Thread YongHyeon PYUN
On Thu, Mar 22, 2012 at 07:08:02PM +, Joe Holden wrote: > Joe Holden wrote: > >YongHyeon PYUN wrote: > >>On Sat, Mar 17, 2012 at 03:18:19PM +, Joe Holden wrote: > >>>Hi guys, > >>> > >>>I've upgraded to 9.0-REL from RC3 (I think) and the previous > >>>workarounds I've used for msk/Yukon II

Re: nmbclusters: how do we want to fix this for 8.3 ?

2012-03-24 Thread Jack Vogel
This whole issue only came up on a system with 10G devices, and only igb does anything like you're talking about, not a device/driver on most low-end systems. So, we are trading red herrings, it would seem. I'm not opposed to economizing things in a sensible way; it was I who brought the issue up

Re: nmbclusters: how do we want to fix this for 8.3 ?

2012-03-24 Thread Ivan Voras
On 24 March 2012 22:02, Juli Mallett wrote: > If we make it easier to change the > tuning of the system for that scenario, then nobody's going to care > what our defaults are, or think us "slow" for them. Unfortunately, years of past experience goes against this particular argument. There are si

Re: nmbclusters: how do we want to fix this for 8.3 ?

2012-03-24 Thread Juli Mallett
On Sat, Mar 24, 2012 at 13:33, Jack Vogel wrote: > On Sat, Mar 24, 2012 at 1:08 PM, John-Mark Gurney wrote: >> If we had some sort of tuning algorithm that would keep track of the >> current receive queue usage depth, and always keep enough mbufs on the >> queue to handle the largest expected bur
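
The idea John-Mark sketches here, watching receive queue usage depth and posting only enough mbufs to cover the largest expected burst, could look roughly like the bookkeeping below. This is a userland sketch under invented names (rxq_stats, rxq_observe, rxq_target), not driver code:

    /*
     * Hedged sketch, not FreeBSD driver code: track the deepest receive
     * queue usage seen and size the mbuf refill target from that, with
     * headroom, instead of always filling the whole ring.
     */
    #include <stdio.h>

    struct rxq_stats {
        unsigned burst_max;     /* largest number of descriptors seen in use */
        unsigned cur_target;    /* how many mbufs we currently keep posted */
    };

    /* Record one observation of descriptors in use, e.g. taken per interrupt. */
    static void
    rxq_observe(struct rxq_stats *st, unsigned in_use)
    {
        if (in_use > st->burst_max)
            st->burst_max = in_use;
    }

    /*
     * Recompute the refill target: the largest burst seen so far plus 50%
     * headroom, with a small floor so an idle queue is never starved, and
     * clamped to the hardware ring size.
     */
    static unsigned
    rxq_target(struct rxq_stats *st, unsigned ring_size)
    {
        unsigned want = st->burst_max + st->burst_max / 2;

        if (want < 64)
            want = 64;
        if (want > ring_size)
            want = ring_size;
        st->cur_target = want;
        return (want);
    }

    int
    main(void)
    {
        struct rxq_stats st = { 0, 0 };
        unsigned samples[] = { 4, 12, 7, 180, 30 };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            rxq_observe(&st, samples[i]);
        printf("post %u mbufs of a 1024-entry ring\n", rxq_target(&st, 1024));
        return (0);
    }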

Re: nmbclusters: how do we want to fix this for 8.3 ?

2012-03-24 Thread Jack Vogel
On Sat, Mar 24, 2012 at 1:08 PM, John-Mark Gurney wrote: > Juli Mallett wrote this message on Thu, Feb 23, 2012 at 08:03 -0800: > > Which sounds slightly off-topic, except that dedicating loads of mbufs > > to receive queues that will sit empty on the vast majority of systems > > and receive a fe

Re: nmbclusters: how do we want to fix this for 8.3 ?

2012-03-24 Thread John-Mark Gurney
Juli Mallett wrote this message on Thu, Feb 23, 2012 at 08:03 -0800: > Which sounds slightly off-topic, except that dedicating loads of mbufs > to receive queues that will sit empty on the vast majority of systems > and receive a few packets per second in the service of some kind of > magical think
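
For readers joining the thread, the limit being argued over is the kern.ipc.nmbclusters sysctl. A minimal way to check its current value through the standard sysctl(3) interface (the sysctl name is real; everything else is just illustration):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int nmb;
        size_t len = sizeof(nmb);

        /* Read the current mbuf cluster limit from the running kernel. */
        if (sysctlbyname("kern.ipc.nmbclusters", &nmb, &len, NULL, 0) == -1) {
            perror("sysctlbyname");
            return (1);
        }
        printf("kern.ipc.nmbclusters: %d\n", nmb);
        return (0);
    }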

Re: firewall stuck

2012-03-24 Thread Kevin Oberman
On Sat, Mar 24, 2012 at 6:30 AM, nyoman.b...@gmail.com wrote: > On Thu, Mar 15, 2012 at 11:47 AM, Kevin Oberman wrote: >> >> Please don't top post. It makes following the thread very difficult. >> (Yes, I know too many MUAs make this difficult.) >> >>  > On Wed, Mar 14, 2012 at 1:12 PM, Kevin Obe

Re: kern/166372: [patch] ipfilter drops UDP packets with zero checksum on some interfaces

2012-03-24 Thread linimon
Synopsis: [patch] ipfilter drops UDP packets with zero checksum on some interfaces Responsible-Changed-From-To: freebsd-bugs->freebsd-net Responsible-Changed-By: linimon Responsible-Changed-When: Sat Mar 24 15:14:26 UTC 2012 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org
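
Background on the PR: in IPv4 a UDP checksum field of zero means the sender omitted the checksum entirely (RFC 768), so dropping such packets as corrupt is wrong. A hedged sketch of the intended acceptance check follows, with an invented helper name and no relation to the actual ipfilter code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * cksum_field: value taken from the UDP header.
     * computed: value recalculated over the pseudo-header and payload.
     */
    static bool
    udp4_cksum_acceptable(uint16_t cksum_field, uint16_t computed)
    {
        if (cksum_field == 0)
            return (true);      /* checksum intentionally omitted (IPv4 only) */
        return (cksum_field == computed);
    }

    int
    main(void)
    {
        printf("zero field accepted: %d\n", udp4_cksum_acceptable(0, 0x1234));
        printf("mismatch accepted: %d\n", udp4_cksum_acceptable(0x1111, 0x1234));
        return (0);
    }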

Re: firewall stuck

2012-03-24 Thread nyoman.b...@gmail.com
On Thu, Mar 15, 2012 at 11:47 AM, Kevin Oberman wrote: > Please don't top post. It makes following the thread very difficult. > (Yes, I know too many MUAs make this difficult.) > > > On Wed, Mar 14, 2012 at 1:12 PM, Kevin Oberman > wrote: > >> > >> On Tue, Mar 13, 2012 at 7:27 PM, nyoman.b...@g

9-STABLE + Infiniband - incorrect interface counters

2012-03-24 Thread Alex Tutubalin
Hi, I'm playing with two FreeBSD 9-STABLE boxes connected via 10Gbps Infiniband (more details below) in Infiniband connected mode. I see incorrect interface statistics (e.g. in netstat output): the output counters are roughly twice what is expected. Example, an ftp transfer of a 1 GiB file: ftp> put file /d
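
One way to separate a netstat display problem from a driver counter problem is to read the per-interface byte counters directly and compare them with the bytes actually transferred. A small sketch using getifaddrs(3) and struct if_data, which are standard on FreeBSD; the interface name "ib0" is only a guess at what the IPoIB interface is called on these boxes:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <ifaddrs.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        struct ifaddrs *ifap, *ifa;

        if (getifaddrs(&ifap) == -1) {
            perror("getifaddrs");
            return (1);
        }
        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
            /* Link-level entries carry the per-interface statistics. */
            if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_LINK)
                continue;
            if (strcmp(ifa->ifa_name, "ib0") != 0)      /* guessed name */
                continue;
            struct if_data *ifd = ifa->ifa_data;
            printf("%s: ibytes %ju obytes %ju\n", ifa->ifa_name,
                (uintmax_t)ifd->ifi_ibytes, (uintmax_t)ifd->ifi_obytes);
        }
        freeifaddrs(ifap);
        return (0);
    }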

Re: Intel 82550 Pro/100 Ethernet and Microcode

2012-03-24 Thread Andreas Longwitz
YongHyeon PYUN wrote: > I have never tried NFS on i82550C. If it can't handle fragmented IP > datagrams, it would also have failed the netperf UDP stream test, since > all UDP datagrams are fragmented. Yes, you are right. The test needs to run for more than 10 seconds to see lost packets. Running netp
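
Since the suspicion is specifically about fragmented IP datagrams, a single oversized UDP send is a cheap way to exercise that path without NFS or netperf: any datagram larger than the 1500-byte Ethernet MTU must be fragmented by IP. A sketch with placeholder destination details:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        char payload[8000];             /* larger than the MTU, forces fragmentation */
        struct sockaddr_in dst;
        int s;

        memset(payload, 'x', sizeof(payload));
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);        /* discard service, placeholder */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

        s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s == -1) {
            perror("socket");
            return (1);
        }
        if (sendto(s, payload, sizeof(payload), 0,
            (struct sockaddr *)&dst, sizeof(dst)) == -1)
            perror("sendto");
        close(s);
        return (0);
    }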