Re: Frequent hickups on the networking layer

2015-05-09 Thread Mark Schouten
Hi, Yes, it did. I see no mbuf errors anymore, no Ethernet errors. Ctld does not crash anymore; it kept running after lowering the MTU to 1500. I am using vlans, and the weirdest thing when lowering the MTU was that everything went crazy when I lowered the MTU only for the vlan interface. Ctld
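The message above suggests that lowering the MTU only on the vlan interface, while the parent kept its jumbo MTU, caused trouble. A minimal sketch of lowering both, assuming hypothetical interface names (`ix0`, `vlan100`) and the usual FreeBSD tools:

```shell
# Hypothetical interface names; substitute your own.
# Lower the parent first, then the vlan -- lowering only the vlan MTU
# leaves the parent NIC still allocating jumbo receive clusters.
ifconfig ix0 mtu 1500
ifconfig vlan100 mtu 1500

# To persist across reboots, the equivalent /etc/rc.conf lines would be:
#   ifconfig_ix0="up mtu 1500"
#   ifconfig_vlan100="vlan 100 vlandev ix0 mtu 1500"
```

This is a configuration sketch, not a verified reproduction of the poster's setup; drivers differ in how they react to MTU changes on a parent with active vlans.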

Re: Frequent hickups on the networking layer

2015-05-09 Thread Christopher Forgeron
Mark, did switching to an MTU of 1500 ever help? I'm currently reliving a problem with this - I'm down to an MTU of 4000, but I still see jumbo pages being allocated - I believe it's my iSCSI setup (using 4k block size, which means the packet is bigger than 4k), but I'm not sure where it's all comin
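The poster's reasoning - that a 4k block makes the packet bigger than 4k - can be sketched with rough arithmetic. The header sizes below are common minimums and are assumptions (TCP options such as timestamps would add more):

```shell
# Why a 4 KiB iSCSI block can't fit an MTU-4000 frame:
DATA=4096        # one 4k iSCSI data segment
BHS=48           # iSCSI basic header segment
TCP=20           # TCP header, no options
IP=20            # IPv4 header, no options
PAYLOAD=$((DATA + BHS + TCP + IP))
echo "IP datagram size: $PAYLOAD bytes"   # -> IP datagram size: 4184 bytes
# 4184 > 4000, so the datagram exceeds the MTU and must be split
# (or, with a 9000 MTU, lands in a 9k jumbo cluster on receive):
[ "$PAYLOAD" -gt 4000 ] && echo "exceeds a 4000-byte MTU"
```

The exact overhead depends on the iSCSI parameters negotiated, but the conclusion holds for any plausible header sizes.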

Re: Frequent hickups on the networking layer

2015-05-06 Thread Mark Schouten
Hi, On 04/29/2015 04:06 PM, Garrett Wollman wrote: If you're using one of the drivers that has this problem, then yes, keeping your layer-2 MTU/MRU below 4096 will probably cause it to use 4k (page-sized) clusters instead, which are perfectly safe. As a side note, at least on the hardware I ha

Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
< said: > I'm not really (or really not) comfortable with hacking and recompiling > stuff. I'd rather not change anything in the kernel. So would it help in > my case to lower my MTU from 9000 to 4000? If I understand correctly, > this would need to allocate chunks of 4k, which is far more logi
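The exchange above turns on which mbuf cluster size a given MTU maps to. A sketch of the common driver pattern, using the FreeBSD cluster sizes (MCLBYTES=2048, MJUMPAGESIZE=4096 on 4k-page machines, MJUM9BYTES=9216, MJUM16BYTES=16384) and ignoring the link-layer header for simplicity - real drivers vary:

```shell
# Rough mapping from MTU to the receive cluster size a typical
# driver would pick; a sketch, not any specific driver's logic.
cluster_for_mtu() {
  mtu=$1
  if   [ "$mtu" -le 2048 ]; then echo 2048    # MCLBYTES
  elif [ "$mtu" -le 4096 ]; then echo 4096    # MJUMPAGESIZE (page-sized, safe)
  elif [ "$mtu" -le 9216 ]; then echo 9216    # MJUM9BYTES (the problematic 9k jumbo)
  else                           echo 16384   # MJUM16BYTES
  fi
}
cluster_for_mtu 1500   # -> 2048
cluster_for_mtu 4000   # -> 4096
cluster_for_mtu 9000   # -> 9216
```

This is why dropping from 9000 to 4000 helps: a 4000-byte frame fits a single page-sized cluster, which never fragments the cluster map the way multi-page 9k allocations can.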

Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
< said: > - as you said, like ~ 64k), and allocate that way. That way there's no > fragmentation to worry about - everything's just using a custom slab > allocator for these large allocation sizes. > It's kind of tempting to suggest freebsd support such a thing, as I > can see increasing requirem

Re: Frequent hickups on the networking layer

2015-04-29 Thread Rick Macklem
Paul Thornton wrote: > Hi, > > On 28/04/2015 22:06, Rick Macklem wrote: > > ... If your > > net device driver is one that allocates 9K jumbo mbufs for receive > > instead of using a list of smaller mbuf clusters, I'd guess this is > > what is biting you. > > Apologies for the thread drift, but is

Re: Frequent hickups on the networking layer

2015-04-29 Thread Paul Thornton
Hi, On 28/04/2015 22:06, Rick Macklem wrote: ... If your net device driver is one that allocates 9K jumbo mbufs for receive instead of using a list of smaller mbuf clusters, I'd guess this is what is biting you. Apologies for the thread drift, but is there a list anywhere of what drivers migh

Re: Frequent hickups on the networking layer

2015-04-29 Thread Mark Schouten
Hi, On 04/28/2015 11:06 PM, Rick Macklem wrote: There have been email list threads discussing how allocating 9K jumbo mbufs will fragment the KVM (kernel virtual memory) used for mbuf cluster allocation and cause grief. If your net device driver is one that allocates 9K jumbo mbufs for receive i
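To see whether a box is suffering the allocation problem described above, FreeBSD exposes the mbuf zones directly. A diagnostic sketch (FreeBSD-specific commands; zone names as they appear on 10.x):

```shell
# Per-zone mbuf statistics; a nonzero FAIL column on mbuf_jumbo_9k
# is the telltale sign of the fragmentation problem discussed here.
vmstat -z | grep -E 'mbuf_cluster|mbuf_jumbo'

# Summary view, including "requests for mbufs denied":
netstat -m
```

Checking these counters before and after lowering the MTU shows whether the driver actually stopped allocating 9k jumbo clusters.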

Re: Frequent hickups on the networking layer

2015-04-28 Thread Adrian Chadd
I've spoken to more than one company about this stuff and their answers are all the same: "we ignore the freebsd allocator, allocate a very large chunk of memory at boot, tell the VM it plainly just doesn't exist, and abuse it via the direct map." That gets around a lot of things, including the "

Re: Frequent hickups on the networking layer

2015-04-28 Thread John-Mark Gurney
Navdeep Parhar wrote this message on Tue, Apr 28, 2015 at 22:16 -0700: > On Wed, Apr 29, 2015 at 01:08:00AM -0400, Garrett Wollman wrote: > > < > said: > ... > > > As far as I know (just from email discussion, never used them myself), > > > you can either stop using jumbo packets or switch to a di

Re: Frequent hickups on the networking layer

2015-04-28 Thread Navdeep Parhar
On Wed, Apr 29, 2015 at 01:08:00AM -0400, Garrett Wollman wrote: > < said: ... > > As far as I know (just from email discussion, never used them myself), > > you can either stop using jumbo packets or switch to a different net > > interface that doesn't allocate 9K jumbo mbufs (doing the receives

Re: Frequent hickups on the networking layer

2015-04-28 Thread Garrett Wollman
< said: > There have been email list threads discussing how allocating 9K jumbo > mbufs will fragment the KVM (kernel virtual memory) used for mbuf > cluster allocation and cause grief. The problem is not KVA fragmentation -- the clusters come from a separate map which should prevent that -- it'

Re: Frequent hickups on the networking layer

2015-04-28 Thread Rick Macklem
Mark Schouten wrote: > Hi, > > > I've got a FreeBSD 10.1-RELEASE box running with iSCSI on top of ZFS. > I've had some major issues with it where it would stop processing > traffic for a minute or two, but that's 'fixed' by disabling TSO. I > do have frequent iscsi errors, which are luckily fixed

Re: Frequent hickups on the networking layer

2015-04-28 Thread Mark Schouten
outen, "freebsd-net@FreeBSD.org" Sent: 28-4-2015 13:58 Subject: RE: Frequent hickups on the networking layer What network card are you using? There have been a few reports of issues with TSO, if you check the list. I had some myself a while ago, but they are now resolved

Frequent hickups on the networking layer

2015-04-28 Thread Mark Schouten
Hi, I've got a FreeBSD 10.1-RELEASE box running with iSCSI on top of ZFS. I've had some major issues with it where it would stop processing traffic for a minute or two, but that's 'fixed' by disabling TSO. I do have frequent iscsi errors, which are luckily fixed on the iscsi layer, but they do
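The workaround the original poster mentions - disabling TSO - can be sketched as follows, assuming a hypothetical NIC name (`ix0`); substitute your own interface:

```shell
# Disable TCP segmentation offload on the running system
# (hypothetical interface name; check `ifconfig` for yours):
ifconfig ix0 -tso

# To keep the setting across reboots, the /etc/rc.conf line would be:
#   ifconfig_ix0="up -tso"
```

This trades some CPU for stability: segmentation moves from the NIC back into the network stack, which sidesteps the driver's TSO bugs at the cost of more per-packet work.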