On Thu, Mar 7, 2013 at 11:54 PM, YongHyeon PYUN wrote:
> On Fri, Mar 08, 2013 at 02:10:41AM -0500, Garrett Wollman wrote:
> > I have a machine (actually six of them) with an Intel dual-10G NIC on
> > the motherboard. Two of them (so far) are connected to a network
> > using jumbo frames, with an
On Thu, Mar 7, 2013 at 11:54 PM, Andre Oppermann wrote:
> On 08.03.2013 08:10, Garrett Wollman wrote:
>
>> I have a machine (actually six of them) with an Intel dual-10G NIC on
>> the motherboard. Two of them (so far) are connected to a network
>> using jumbo frames, with an MTU a little under 9
On Fri, Mar 08, 2013 at 12:27:37AM -0800, Jack Vogel wrote:
> On Thu, Mar 7, 2013 at 11:54 PM, YongHyeon PYUN wrote:
>
> > On Fri, Mar 08, 2013 at 02:10:41AM -0500, Garrett Wollman wrote:
> > > I have a machine (actually six of them) with an Intel dual-10G NIC on
> > > the motherboard. Two of th
On Thu, Mar 7, 2013 at 2:51 PM, Andre Oppermann wrote:
> On 07.03.2013 14:38, Ermal Luçi wrote:
>
>> On Thu, Mar 7, 2013 at 12:55 PM, Andre Oppermann <an...@freebsd.org> wrote:
>>
>> On 07.03.2013 12:43, Alexander V. Chernikov wrote:
>>
>> On 07.03.2013 11:39, Andre Oppermann wrote:
Hi guys. I'm digging into some bpf stuff and I can't figure out why there are 3
types of data representations: words, halfwords and bytes. I mean, how can I
know which one is best to use in a given place? In some basic example, e.g. for
packet capturing, considering BPF's manual, I use for ETHERTYPE in the
et
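For reference, a minimal sketch of the kind of filter bpf(4) describes, showing
how the load width simply follows the size of the header field: the 16-bit
Ethernet type at offset 12 is loaded with BPF_H, the one-byte IP protocol field
with BPF_B, and a 32-bit field such as an IPv4 address would use BPF_W. This is
a generic tcpdump-style example, not code from this thread:

    #include <sys/types.h>
    #include <net/bpf.h>
    #include <net/ethernet.h>
    #include <netinet/in.h>

    /* Accept IPv4 TCP packets, drop everything else. */
    static struct bpf_insn tcp_filter[] = {
        /* load the 16-bit ethertype at offset 12 (BPF_H = halfword) */
        BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, ETHERTYPE_IP, 0, 3),
        /* load the 8-bit IP protocol at offset 14 + 9 (BPF_B = byte) */
        BPF_STMT(BPF_LD + BPF_B + BPF_ABS, 23),
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, IPPROTO_TCP, 0, 1),
        BPF_STMT(BPF_RET + BPF_K, (u_int)-1),   /* accept whole packet */
        BPF_STMT(BPF_RET + BPF_K, 0),           /* reject */
    };

    static struct bpf_program tcp_prog = {
        sizeof(tcp_filter) / sizeof(tcp_filter[0]),
        tcp_filter,
    };
    /* attach with: ioctl(bpf_fd, BIOCSETF, &tcp_prog); */

In other words, there is no single "best" width; you load with the size that
matches the field you are comparing against.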
On 07.03.2013 17:55, freebsd-net wrote:
Greetings Maciej Milewski, and thank you for your thoughtful reply.
On 06.03.2013 22:02, freebsd-net wrote:
Greetings,
I'm evaluating an ISP for the sake of building BSD operating systems on
hardware
that they use (DSL modems, in this case). When I ha
Hello there!
In my environment, where I use FreeBSD machines as load balancers, after a server
is detected as dead, the load balancer removes the broken server from a table
used in a route-to pf rule and then removes the source entries pointing clients to
that server, so clients previously assigned to t
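A rough userland sketch of the first half of that setup (pulling a dead server
out of the table referenced by the route-to rule) through the pf(4) ioctl
interface. The DIOCRDELADDRS ioctl and the pfioc_table/pfr_addr structures come
from <net/pfvar.h>, but the field names should be checked against your release;
removing the matching source-tracking entries is a separate operation and is
not shown here:

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <net/pfvar.h>
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Delete one IPv4 host address from a pf table, e.g. the table a
     * route-to rule load-balances over. */
    static int
    table_del_server(const char *table, const char *ip4)
    {
        struct pfioc_table io;
        struct pfr_addr addr;
        int dev, ret = -1;

        memset(&io, 0, sizeof(io));
        memset(&addr, 0, sizeof(addr));

        addr.pfra_af = AF_INET;
        addr.pfra_net = 32;                       /* single host */
        if (inet_pton(AF_INET, ip4, &addr.pfra_ip4addr) != 1)
            return (-1);

        strlcpy(io.pfrio_table.pfrt_name, table,
            sizeof(io.pfrio_table.pfrt_name));
        io.pfrio_buffer = &addr;
        io.pfrio_esize = sizeof(addr);
        io.pfrio_size = 1;

        if ((dev = open("/dev/pf", O_RDWR)) == -1)
            return (-1);
        if (ioctl(dev, DIOCRDELADDRS, &io) == 0) {
            printf("%d address(es) deleted from <%s>\n", io.pfrio_ndel, table);
            ret = 0;
        } else
            perror("DIOCRDELADDRS");
        close(dev);
        return (ret);
    }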
Maciej Milewski, and thank you for your reply.
> On 07.03.2013 17:55, freebsd-net wrote:
>> Greetings Maciej Milewski, and thank you for your thoughtful reply.
>>> On 06.03.2013 22:02, freebsd-net wrote:
Greetings,
I'm evaluating an ISP for the sake of building BSD operating systems on
On 08.03.2013 16:06, freebsd-net wrote:
While I agree that inserting a router/switch between the modem & the clients/servers
would be the shortest/easiest solution, in the end I think the investment in
building a (free)bsd kernel && drivers for the modem would provide the
biggest reward(s). Tru
On 08.03.2013 01:42, John-Mark Gurney wrote:
Andre Oppermann wrote this message on Thu, Mar 07, 2013 at 08:39 +0100:
Adding an interface address is handled by atomically deleting the old prefix and
adding the interface one.
This brings up a long standing sore point of our routing code
which this patch m
< said:
> I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
> Garrett maybe if you would try that on your system and see if that helps
> you, I could envision making this a tunable at some point perhaps?
If you can provide a patch I can certainly build it in to our kernel
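As a rough illustration of what "the 4K mbuf pool for all larger sizes" could
look like in an RX refill path, here is a sketch built on the stock m_getjcl(9)
allocator: always request page-sized (MJUMPAGESIZE) clusters and let a 9K frame
span several descriptors. This is only a sketch of the idea, not the actual
em/ixgbe code:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Allocate one RX buffer from the 4K (page-size) cluster zone
     * instead of the 9K jumbo zone; a jumbo frame then occupies a
     * chain of these buffers. */
    static struct mbuf *
    rx_refill_buf(void)
    {
        struct mbuf *m;

        m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE);
        if (m == NULL)
            return (NULL);
        m->m_len = m->m_pkthdr.len = MJUMPAGESIZE;
        return (m);
    }

One attraction of that approach is that page-sized clusters avoid the
multi-page physically contiguous allocations that 9K clusters require.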
< said:
> [stuff I wrote deleted]
> You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay "cache".
> Jumbo pages come directly from the kernel_map which on amd64 is 512GB.
> So KVA shouldn't be a problem. Your pr
On 08.03.2013 18:04, Garrett Wollman wrote:
< said:
I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
Garrett maybe if you would try that on your system and see if that helps
you, I could envision making this a tunable at some point perhaps?
If you can provide a patch
Hello list@,
I'm mostly active on OpenBSD-side, however I have several machines running fbsd
with ZFS.
I've recently upgraded (today) from 8.2-stable to 9.1-rel because of a problem
with em(4) on 8.2-stable.
However, my problem has not disappeared after the mentioned upgrade.
I serve VMWare ima
The message occurs because you don't have enough mbufs to set up the RX ring,
so you need to look at nmbclusters. It may be that em is just the victim, since
you have igb interfaces as well, from what I see.
Jack
On Fri, Mar 8, 2013 at 11:19 AM, mxb wrote:
>
> Hello list@,
>
> I'm mostly active
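For anyone checking their own box, a minimal userland sketch using the standard
sysctlbyname(3) call to read the limit Jack points at; raising it would be done
with sysctl(8) or in /boot/loader.conf rather than from code:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int nmbclusters;
        size_t len = sizeof(nmbclusters);

        /* current cluster limit; compare against what the RX rings need */
        if (sysctlbyname("kern.ipc.nmbclusters", &nmbclusters, &len,
            NULL, 0) == -1) {
            perror("sysctlbyname");
            return (1);
        }
        printf("kern.ipc.nmbclusters = %d\n", nmbclusters);
        return (0);
    }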
Is this FreeBSD 9.x or HEAD?
On Fri, Mar 8, 2013 at 2:19 PM, Kajetan Staszkiewicz
wrote:
> Hello there!
>
> In my environment, where I use FreeBSD machines as load balancers, after a
> server is detected as dead, the load balancer removes the broken server from
> a table used in a route-to pf ru
Yes, in the past the code was in this form; it should work fine, Garrett,
just make sure the 4K pool is large enough.
I've actually been thinking about making the ring mbuf allocation sparse,
and what type of strategy could be used. Right now I'm thinking of implementing
a tunable threshold,
and as
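Purely as a hypothetical sketch of that sparse-ring idea (every name below,
sketch_ring, rx_refill_thresh and so on, is invented for illustration, and none
of this is em(4)/ixgbe(4) code): the ring is only topped up once a tunable
number of slots have been consumed, and refilling stops early under memory
pressure:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    struct sketch_ring {                 /* invented, minimal ring state */
        struct mbuf *slots[256];
        int          empty;              /* slots currently without an mbuf */
        int          next_free;          /* first slot lacking a buffer */
    };

    static int rx_refill_thresh = 32;    /* imagined sysctl/tunable */

    static void
    rx_maybe_refill(struct sketch_ring *r)
    {
        /* do nothing until enough buffers have been consumed */
        if (r->empty < rx_refill_thresh)
            return;

        while (r->empty > 0) {
            struct mbuf *m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR,
                MJUMPAGESIZE);
            if (m == NULL)
                break;                   /* stay sparse under memory pressure */
            r->slots[r->next_free] = m;
            r->next_free = (r->next_free + 1) % nitems(r->slots);
            r->empty--;
        }
    }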
Any sysctl I should look out for?
//maxim
On 8 mar 2013, at 21:03, Jack Vogel wrote:
> The message occurs because you don't have enough mbufs to set up the RX
> ring, so you need to look at nmbclusters. It may be that em is just the
> victim, since you have igb interfaces
> as well from w
kern.ipc.nmbclusters
Jeff
-----Original Message-----
From: owner-freebsd-...@freebsd.org [mailto:owner-freebsd-...@freebsd.org] On
Behalf Of mxb
Sent: Friday, March 08, 2013 12:17 PM
To: Jack Vogel
Cc: freebsd-net@freebsd.org; mxb
Subject: Re: 9.1-RELEASE-p1: em0: Could not setup receive structu
< said:
> Yes, in the past the code was in this form; it should work fine, Garrett,
> just make sure
> the 4K pool is large enough.
I take it then that the hardware works in the traditional way, and
just keeps on using buffers until the packet is completely written,
then sets a field on the ring d
Yes, the write-back descriptor has a bit in the status field that says whether
it's EOP (end of packet) or not.
Jack
On Fri, Mar 8, 2013 at 12:28 PM, Garrett Wollman wrote:
> < said:
>
> > Yes, in the past the code was in this form; it should work fine, Garrett,
> > just make sure
> > the 4K pool is larg
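To illustrate the descriptor walk being discussed: the driver keeps consuming
write-back descriptors until it sees one whose status has the EOP bit set, and
only then hands the assembled chain to the stack. The DD/EOP values below
follow the familiar e1000-style status layout (0x01/0x02) but are defined
locally here as assumptions, and the descriptor struct is simplified:

    #include <stdint.h>
    #include <stddef.h>

    #define RXD_STAT_DD   0x01    /* descriptor done: data written back */
    #define RXD_STAT_EOP  0x02    /* last descriptor of this packet */

    struct rx_desc {              /* simplified write-back layout */
        uint64_t buf_addr;
        uint16_t length;
        uint8_t  status;
        uint8_t  errors;
    };

    /* Count the descriptors making up one complete packet starting at
     * 'head'; return 0 if the packet has not been fully written yet. */
    static size_t
    rx_packet_span(const struct rx_desc *ring, size_t nslots, size_t head)
    {
        size_t used = 0;

        while (used < nslots) {
            const struct rx_desc *d = &ring[(head + used) % nslots];

            if (!(d->status & RXD_STAT_DD))
                return (0);            /* hardware still owns this slot */
            used++;
            if (d->status & RXD_STAT_EOP)
                return (used);         /* frame ends here; hand chain up */
        }
        return (0);
    }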
Synopsis: ifconfig(8): ioctl (SIOCAIFADDR): File exists on directly connected
networks
Responsible-Changed-From-To: freebsd-net->melifaro
Responsible-Changed-By: melifaro
Responsible-Changed-When: Fri Mar 8 20:45:18 UTC 2013
Responsible-Changed-Why:
Take
http://www.freebsd.org/cgi/query-pr.cgi?
On Friday, 8 March 2013 at 21:11:43, Ermal Luçi wrote:
> Is this FreeBSD 9.x or HEAD?
I found the problem and developed the patch on 9.1.
--
| pozdrawiam / greetings | powered by Debian, CentOS and FreeBSD |
| Kajetan Staszkiewicz | jabber,email: vegeta()tuxpowered net |
|Vegeta
Old Synopsis: [net] [if_bridge] use-after-free in if_bridge
New Synopsis: [net] [if_bridge] [patch] use-after-free in if_bridge
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Fri Mar 8 23:13:52 UTC 2013
Responsible-Changed-Why:
Ove
Garrett Wollman wrote:
> < said:
>
> > [stuff I wrote deleted]
> > You have an amd64 kernel running HEAD or 9.x?
>
> Yes, these are 9.1 with some patches to reduce mutex contention on the
> NFS server's replay "cache".
>
The cached replies are copies of the mbuf list done via m_copym().
As such
< said:
> If reducing the size to 4K doesn't fix the problem, you might want to
> consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
> the increased CPU overhead (and some increased mutex contention) of
> calling nfsrv_trimcache() more frequently.
Can't do that -- the system beco
< said:
> The cached replies are copies of the mbuf list done via m_copym().
> As such, the clusters in these replies won't be free'd (ref cnt -> 0)
> until the cache is trimmed (nfsrv_trimcache() gets called after the
> TCP layer has received an ACK for receipt of the reply from the client).
I
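A minimal kernel-side sketch of the mechanism described above, assuming only
the stock m_copym(9) KPI: the cached copy shares the reply's clusters by
reference count, so the cluster storage is only returned to the zone once both
the transmitted reply and the cached copy have been freed (i.e. after the cache
is trimmed):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Keep a reference-counted copy of an NFS reply for the replay cache.
     * The copy shares the original chain's clusters, so their storage is
     * pinned until the cache entry is freed by the trim pass. */
    static struct mbuf *
    cache_reply(struct mbuf *reply)
    {
        struct mbuf *copy;

        copy = m_copym(reply, 0, M_COPYALL, M_NOWAIT);
        return (copy);    /* m_freem() it when the entry is trimmed */
    }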