[snip]
So, hm, the thing that comes to mind is the flowid. What are the various
flowids for the flows? Are they all mapping to CPU 3 somehow?
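Purely as an illustration (this is not the actual igb code, and the
function name is made up), the usual queue-selection pattern is
something like:

#include <stdint.h>

/*
 * Sketch: drivers commonly pick a queue/CPU as flowid % nqueues.
 * If the hardware reports the same flowid for every flow, all of
 * them land on the same queue -- and thus on the same CPU.
 */
static unsigned
flowid_to_cpu(uint32_t flowid, unsigned ncpus)
{
	return (flowid % ncpus);
}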
-a
(Note: This may seem more like a rant than an actual problem report.)
I am on a stable-10ish box with igb0. The workload is mainly inbound NFS
traffic, with about 2K connections at any point in time.
device igb # Intel PRO/1000 PCIE Server Gigabit Family
hw.igb.rxd: 4096
hw.igb.txd: 4
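For what it's worth, here is a quick userland sketch (untested) that
reads those tunables back via sysctl(3), to confirm what the kernel
actually picked up:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int rxd, txd;
	size_t len;

	len = sizeof(rxd);
	if (sysctlbyname("hw.igb.rxd", &rxd, &len, NULL, 0) == 0)
		printf("hw.igb.rxd: %d\n", rxd);
	len = sizeof(txd);
	if (sysctlbyname("hw.igb.txd", &txd, &len, NULL, 0) == 0)
		printf("hw.igb.txd: %d\n", txd);
	return (0);
}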
Sweet!
I'll ask around and see if anyone netmap-clued can review. :)
-a
On 10 April 2014 08:18, Karim Fodil-Lemelin wrote:
> Hi,
>
> By the way, this change has opened the gates to greater performance for us
> when using ng_callout() inside nodes. In some cases we see twice as much pps,
> since packets are direct-dispatched instead of being queued in the software
> interrupt threads (swi*).
Hi,
By the way, this change has opened the gates to greater performance for
us when using ng_callout() inside nodes. In some cases we see twice as
much pps, since packets are direct-dispatched instead of being queued in
the software interrupt threads (swi*).
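For reference, the pattern is roughly this -- a sketch from memory with
invented names (my_node_tick, my_priv); check sys/netgraph/netgraph.h
for the exact prototypes:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/callout.h>
#include <netgraph/ng_message.h>
#include <netgraph/netgraph.h>

struct my_priv {
	struct callout timer;		/* per-node timer */
};

/* Runs in netgraph context when the callout fires; with direct
 * dispatch this work no longer bounces through an swi thread. */
static void
my_node_tick(node_p node, hook_p hook, void *arg1, int arg2)
{
	/* e.g. push queued mbufs out of the node here, then rearm */
}

static void
my_node_start_timer(node_p node, struct my_priv *priv)
{
	/* fire in ~10 ms */
	ng_callout(&priv->timer, node, NULL, hz / 100,
	    my_node_tick, NULL, 0);
}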
Thanks,
Karim
PS: I did file a PR :
> Another note related to Q-in-Q.
>
> You would probably be better off creating standard vlans for the first
> vlan layer and using ng_vlan for the second++ part of the Q-in-Q on top
> of the first ones.
> This also gives better usability and will speed things up a bit.
>
So I i
On Tue, Apr 08, 2014 at 01:59:39PM +0900, Yonghyeon PYUN wrote:
> On Mon, Apr 07, 2014 at 08:45:00PM +0200, Frank Volf wrote:
> > Yonghyeon PYUN schreef op 7-4-2014 10:32:
> > >It would be even better to know your network configuration. I'm not
> > >sure why you have to disable VLAN hardware tagging
Another note related to Q-in-Q.
You would probably be better off creating standard vlans for the first
vlan layer and using ng_vlan for the second++ part of the Q-in-Q on top
of the first ones.
This also gives better usability and will speed things up a bit.
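As a sketch of driving the ng_vlan side from C -- the node and hook
names ("qinq", "v200") are hypothetical, and the ascii message format is
from memory of ng_vlan(4), so double-check it -- build with -lnetgraph:

#include <netgraph.h>
#include <stdio.h>

int
main(void)
{
	int cs, ds;

	/* anonymous control socket into the netgraph graph */
	if (NgMkSockNode(NULL, &cs, &ds) < 0) {
		perror("NgMkSockNode");
		return (1);
	}
	/* ask the ng_vlan node "qinq" to steer inner VLAN 200
	 * out of its hook "v200" */
	if (NgSendAsciiMsg(cs, "qinq:",
	    "addfilter { vlan=%d hook=\"%s\" }", 200, "v200") < 0) {
		perror("NgSendAsciiMsg");
		return (1);
	}
	return (0);
}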
On Thu, Apr 10, 2014 at 1:22 PM, Hartm
> On Wed, 9 Apr 2014, Vladislav Prodan wrote:
>
> VP>b) Service bsnmpd started with 12K interfaces, but immediately loaded
> VP>the CPU at 80-100%
>
> I could imagine that this is because of the statistics polling. bsnmp
> implements 64-bit interface statistics but we have only 32-bit statistics
On Thu, 10 Apr 2014, Vladislav Prodan wrote:
VP>> On Wed, 9 Apr 2014, Vladislav Prodan wrote:
VP>>
VP>> VP>b) Service bsnmpd started with 12K interfaces, but immediately loaded
VP>> VP>the CPU at 80-100%
VP>>
VP>> I could imagine that this is because of the statistics polling. bsnmp
VP>> implements
On Wed, 9 Apr 2014, Vladislav Prodan wrote:
VP>b) Service bsnmpd started with 12K interfaces, but immediately loaded
VP>the CPU at 80-100%
I could imagine that this is because of the statistics polling. bsnmp
implements 64-bit interface statistics but we have only 32-bit statistics
in the kernel. So
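Roughly the arithmetic this implies -- a sketch, not bsnmp's actual
code:

#include <stdint.h>

/*
 * Maintain a 64-bit counter on top of a 32-bit kernel counter.
 * Unsigned subtraction handles at most one wrap between polls,
 * which is why the daemon must poll every interface frequently --
 * and with 12K interfaces that polling alone can eat the CPU.
 */
static void
update64(uint64_t *acc, uint32_t *last, uint32_t now)
{
	*acc += (uint32_t)(now - *last);
	*last = now;
}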
From experience with a large number of interfaces and configuring them:
it's not that the kernel cannot handle it; the problem is that you call
generic utilities to do the job. E.g., to set up an IP on an interface,
ifconfig first has to get the whole list of interfaces to determine if
that interface exists
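To illustrate the difference, a sketch of the direct approach
(illustrative only; "vlan100" is just an example name): ask the kernel
about one interface by name instead of enumerating all 12K of them the
way getifaddrs(3) does:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct ifreq ifr;
	int s;

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
		return (1);
	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, "vlan100", sizeof(ifr.ifr_name));
	/* fails fast if the interface does not exist */
	if (ioctl(s, SIOCGIFFLAGS, &ifr) == 0)
		printf("%s exists, flags 0x%x\n", ifr.ifr_name,
		    ifr.ifr_flags & 0xffff);
	close(s);
	return (0);
}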