On Dec 27, 2007 2:26 AM, Julian Elischer <[EMAIL PROTECTED]> wrote:
> Resending as my mailer made a dog's breakfast of the first one
> with all sorts of weird line breaks... hopefully this will be better.
> (I haven't sent it yet so I'm hoping)..
>
>
> ---
>
On Wed, Dec 26, 2007 at 02:24:44PM -0500, Sten Daniel Soersdal said:
> Jean-Claude MICHOT wrote:
> >The server is a DELL PowerEdge 860 freshly installed with
> >FreeBSD 7.0-BETA4 (GENERIC Kernel).
> >
> >There's no problem with input throughput (up to 980 Mbit/s) but output
> >throughput never goes up
Resending as my mailer made a dog's breakfast of the first one
with all sorts of weird line breaks... hopefully this will be better.
(I haven't sent it yet so I'm hoping)..
---
One thing where FreeBSD has been falling behind, and which by chance I
have some time to work on, is "policy based routing", which allows different
packet streams to be routed by more than just the destination address.
Constraints:
I want to make some form of this available in the 6.x
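The idea can be sketched abstractly: a policy lookup consults an ordered rule list keyed on fields other than the destination (here, the source prefix) before falling back to the usual destination-based table. This is an illustrative model only; the names and data structures below are hypothetical and are not FreeBSD kernel APIs.

```python
# Minimal model of a policy-routing lookup: policy rules (matched on
# source address) are consulted in order before the ordinary
# destination-keyed routing table. Addresses here are illustrative.
import ipaddress

policy_rules = [
    # (source prefix, next hop) -- checked in order before the main table
    (ipaddress.ip_network("10.0.0.0/24"), "192.168.1.1"),
]
main_table = {
    ipaddress.ip_network("0.0.0.0/0"): "192.168.0.254",  # default route
}

def route(src: str, dst: str) -> str:
    """Return the next hop, preferring source-based policy rules."""
    s = ipaddress.ip_address(src)
    for prefix, nexthop in policy_rules:
        if s in prefix:
            return nexthop
    d = ipaddress.ip_address(dst)
    for prefix, nexthop in main_table.items():
        if d in prefix:
            return nexthop
    raise LookupError("no route")
```

For example, traffic sourced from 10.0.0.0/24 would be steered to 192.168.1.1 regardless of destination, while everything else follows the normal default route.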
On Dec 26, 2007 8:10 AM, Nash Nipples <[EMAIL PROTECTED]> wrote:
> Dear Jordi,
>
> In theory, on a Gigabit link you get 1 000 000 000 bits per second.
> By default you have the MTU set to 1500 bytes which makes ~12 000 bits.
> 1 000 000 000 / 12 000 = ~ 83 333 packets per second.
> 83 333 packets per
Jean-Claude MICHOT wrote:
The server is a DELL PowerEdge 860 freshly installed with
FreeBSD 7.0-BETA4 (GENERIC Kernel).
pciconf and part of boot information:
[EMAIL PROTECTED]:4:0:0:class=0x02 card=0x01e61028 chip=0x165914e4
rev=0x11 hdr=0x00
vendor = 'Broadcom Corporation'
device = 'BCM5721 N
Synopsis: [netipsec] [patch] enc(4) and dummynet together produce kernel panics
Responsible-Changed-From-To: freebsd-net->thompsa
Responsible-Changed-By: thompsa
Responsible-Changed-When: Wed Dec 26 17:02:25 UTC 2007
Responsible-Changed-Why:
I'll grab this one. I was forwarded the local patch fro
Dear Jordi,
In theory, on a Gigabit link you get 1 000 000 000 bits per second.
By default you have the MTU set to 1500 bytes which makes ~12 000 bits.
1 000 000 000 / 12 000 = ~ 83 333 packets per second.
83 333 packets per second makes 0.08 packets per microsecond.
1 / 0.08333 = 12.0 microseconds
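The arithmetic above can be rechecked in a few lines (a worked recalculation of the thread's own numbers, ignoring Ethernet framing overhead):

```python
# Back-of-envelope packet rate on a gigabit link with full-size frames.
LINK_BPS = 1_000_000_000           # 1 Gbit/s
BITS_PER_PACKET = 1500 * 8         # MTU of 1500 bytes = 12 000 bits

pps = LINK_BPS / BITS_PER_PACKET   # ~83 333 packets per second
gap_us = BITS_PER_PACKET * 1_000_000 / LINK_BPS  # 12.0 us between packets
```

So at wire rate a new full-size packet arrives roughly every 12 microseconds, which is the figure the post is building toward.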
Mark Fullmer wrote:
On Dec 25, 2007, at 12:27 AM, Kostik Belousov wrote:
What fs do you use? If FFS, are softupdates turned on? Please show the
total time spent in the softdepflush process.
Also, try to add the FULL_PREEMPTION kernel config option and report
whether it helps.
FFS with soft updates on all
Synopsis: [netipsec] [patch] enc(4) and dummynet together produce kernel panics
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Wed Dec 26 12:04:44 UTC 2007
Responsible-Changed-Why:
Over to maintainer(s).
http://www.freebsd.org/cgi
Hi,
Jordi Espasa Clofent wrote:
I want to say that I don't know whether 8000 IRQs per second means high
IRQ use or low use.
I must say that I haven't done hardware work for some time, but 10 000
interrupts per second is not that high. Modern CPUs should be able to
handle much, much more.
S
Hi,
I think this is a really good question.
I'm curious since we use a lot of stripped-down FreeBSD for modest
performance routers.
We typically enable our interfaces with POLLING not so much for
performance (it seems to be a negligible improvement nowadays) but so
that we know that ou
OK, I'll try to explain in another way.
While doing network performance tests I've monitored the IRQ rate;
for example, it's 7000-8000 interrupts per second on every NIC (I
use 2 NICs in a bridge). The question is:
how can I know if this IRQ rate is too high or not? How can I know
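One rough way to put an observed IRQ rate in context (a back-of-envelope estimate only, assuming the NIC coalesces interrupts and using the ~83 333 pps gigabit maximum discussed earlier in the thread):

```python
# Relate an observed interrupt rate to the theoretical packet rate:
# packets handled per interrupt hints at how much coalescing the NIC
# is doing, and thus how much headroom the IRQ rate leaves.
max_pps = 1_000_000_000 // (1500 * 8)   # ~83 333 pps at gigabit, 1500-byte frames
observed_irqs_per_s = 8_000             # figure reported in this thread
packets_per_irq = max_pps / observed_irqs_per_s  # ~10 packets per interrupt
```

By this estimate, sustaining wire rate at 8000 IRQs/s would only require the NIC to batch about ten packets per interrupt, which modern coalescing hardware handles comfortably.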
On Dec 25, 2007 4:21 AM, Jordi Espasa Clofent <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I know how to monitor the NICs' IRQ consumption with tools such as vmstat
> (-i flag), systat (-vm 1) or netstat (-m, -i), but I don't know how to
> determine the maximum interrupt rate these NICs can sustain.
>
> I've