Luigi Rizzo wrote:
>
> On Wed, Feb 15, 2006 at 09:20:05PM +0100, Andre Oppermann wrote:
> ...
> > From my profiling with the Agilent tester there seem to be two areas where
> > the packet filters (ipfw in my test case) burn a lot of CPU per packet.
> > That is a) setup of lots of packet variables unconditionally at the entry
On Wed, Feb 15, 2006 at 09:20:05PM +0100, Andre Oppermann wrote:
...
> From my profiling with the Agilent tester there seem to be two areas where
> the packet filters (ipfw in my test case) burn a lot of CPU per packet.
> That is a) setup of lots of packet variables unconditionally at the entry
>
Max Laier wrote:
>
> On Friday 10 February 2006 20:54, Julian Elischer wrote:
> > Marcos Bedinelli wrote:
> > > Hello all,
> > >
> > > thanks for the replies. Most of you have suggested that I turn on
> > > polling and give it a try. The machine is in production, hence I need
> > to schedule downtime for that.
On Tue, Feb 14, 2006 at 10:54:34AM -0500, Marcos Bedinelli wrote:
M> Gleb,
M>
M> thanks again for looking into this and for your suggestions.
M>
M> Unfortunately Alpha/Beta/release candidate/pre-release/test versions of
M> software are a "no go" on that machine. Our short term solution will be
Gleb,
thanks again for looking into this and for your suggestions.
Unfortunately Alpha/Beta/release candidate/pre-release/test versions of
software are a "no go" on that machine. Our short term solution will be
to upgrade the CPU to a faster model. After that, I will be able to
assemble a d
On Fri, Feb 10, 2006 at 08:46:00AM -0500, Marcos Bedinelli wrote:
M> We have a 2.4GHz Intel Xeon machine running FreeBSD 6.0-RELEASE-p2. Due
M> to heavy network traffic, CPU utilization on that machine is 100%:
M>
M> ===
M>
M> mull [~]$top -S
M> last pid: 94989; load averages: 3.69, 4.02, 4.
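The `top -S` listing above is truncated, but the key figure it reveals is how much CPU the system threads eat. A minimal sketch of pulling the WCPU column for the `swi1: net` thread out of batch-mode output (the sample line and its numbers are invented for illustration):

```shell
# Extract the WCPU column for the swi1: net kernel thread.
# The here-string stands in for `top -Sb` on a live box;
# all the numbers in it are hypothetical.
top_sample='   20 root  -44 -163    0K   12K WAIT   0:00 78.12% swi1: net'
echo "$top_sample" | awk '/swi1: net/ {print $(NF-2)}'
```

On the live system, `top -Sb | awk '/swi1: net/ {print $(NF-2)}'` would do the same; if that figure sits near 100%, the box is saturating in the network software-interrupt path rather than in userland.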
Hi,
On 10-Feb-06, at 16:39, dima wrote:
The second CPU wouldn't help you for sure. There's only one [swi1:
net] kernel thread which deals with all the kernel traffic. The option
of per-CPU [swi: net] threads was discussed on freebsd-arch@ several
months ago, but it wouldn't be implemented so
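Since all deferred input funnels through that one [swi1: net] thread, one related knob on 5.x/6.x is `net.isr.direct` (a sketch; whether it exists and what it defaults to varies by release):

```shell
# With net.isr.direct=1 the stack processes inbound packets in the
# interrupt/driver context instead of queueing them all to the single
# swi1: net thread; with 0 everything goes through that thread.
sysctl net.isr.direct      # inspect the current dispatch mode
sysctl net.isr.direct=1    # switch to direct dispatch (needs root)
</imports>
```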
On Sat, 11 Feb 2006, dima wrote:
There are several software (FreeBSD specific) options though:
1. You should surely try polling(4). 50Kpps means 50K interrupts and
the same number of context switches, which are quite expensive.
While this was true in the 80's, it is blatantly wrong for any
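Whatever the true per-interrupt cost is on modern hardware, trying polling(4) is cheap. A sketch of enabling it on 6.x (the interface name em0 and the HZ value are assumptions; the kernel must be rebuilt with the two options shown):

```shell
# Kernel config needs:
#   options DEVICE_POLLING
#   options HZ=1000        # polling quality depends on a high tick rate
# Then, at runtime:
ifconfig em0 polling              # enable polling per interface (6.x style)
sysctl kern.polling.user_frac=50  # share of each tick reserved for userland
```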
On Sat, 11 Feb 2006, dima wrote:
The system is mainly being used as a dedicated router. It runs OSPF, BGP
and IPFW (around 150 rules). OSPF and BGP are managed by Quagga. The box
has 2 gigabit interfaces that handle on average 200Mbps - 50K packets/s
(inbound and outbound combined), each one
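Those figures are worth a sanity check: 200 Mbit/s spread across 50 Kpps is 4000 bits, i.e. an average packet of about 500 bytes, so this is not a minimum-sized-packet worst case:

```shell
# Average packet size implied by the quoted load.
bits_per_sec=200000000    # 200 Mbit/s
pkts_per_sec=50000        # 50 Kpps
echo $(( bits_per_sec / pkts_per_sec / 8 ))   # bytes per packet
```

which prints 500.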
On Fri, 10 Feb 2006 14:57:26 -0500, in sentex.lists.freebsd.net you
wrote:
>
>"If your system runs out of CPU (idle times are perpetually 0%) then
>you need to consider upgrading the CPU or moving to an SMP motherboard
>(multiple CPU's), or perhaps you need to revisit the programs that are
>causing the load and try to optimize
On Friday 10 February 2006 20:54, Julian Elischer wrote:
> Marcos Bedinelli wrote:
> > Hello all,
> >
> > thanks for the replies. Most of you have suggested that I turn on
> > polling and give it a try. The machine is in production, hence I need
> > to schedule downtime for that.
> >
> > The system is mainly being used as a dedicated router.
> Hello all,
>
> thanks for the replies. Most of you have suggested that I turn on
> polling and give it a try. The machine is in production, hence I need
> to schedule downtime for that.
>
> The system is mainly being used as a dedicated router. It runs OSPF,
> BGP and IPFW (around 150 rules)
Marcos Bedinelli wrote:
Hi Julian,
On 10-Feb-06, at 14:54, Julian Elischer wrote:
I have found that most people can optimise their ipfw rulesets
considerably.
For example, a first rule of:
1 allow ip from any to any in recv {inside interface}
2 allow ip from any to any out xmit {inside interface}
Marcos Bedinelli (bedinelli) writes:
>
> "If your system runs out of CPU (idle times are perpetually 0%) then
> you need to consider upgrading the CPU or moving to an SMP motherboard
> (multiple CPU's), or perhaps you need to revisit the programs that are
> causing the load and try to optimize
Hi Julian,
On 10-Feb-06, at 14:54, Julian Elischer wrote:
I have found that most people can optimise their ipfw rulesets
considerably.
For example, a first rule of:
1 allow ip from any to any in recv {inside interface}
2 allow ip from any to any out xmit {inside interface}
will cut your ipfw
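Translated into actual commands, Julian's suggestion looks something like the sketch below (the rule numbers and fxp0 as the inside interface are assumptions): by passing trusted inside-interface traffic immediately, the remaining ~150 rules are only evaluated for packets crossing the outside interface.

```shell
# Pass inside-interface traffic up front so the long ruleset
# only runs for outside-interface packets (fxp0 is hypothetical).
ipfw add 100 allow ip from any to any in recv fxp0
ipfw add 200 allow ip from any to any out xmit fxp0
# ...the ~150 real filtering rules follow, now hit far less often...
```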
Hi,
On 10-Feb-06, at 13:06, Chuck Swiger wrote:
Marcos Bedinelli wrote:
[ ... ]
mull [~]$vmstat -i
interrupt total rate
irq1: atkbd0    3466    0
irq6: fdc0        10    0
irq13: npx0        1    0
Marcos Bedinelli wrote:
Hello all,
thanks for the replies. Most of you have suggested that I turn on
polling and give it a try. The machine is in production, hence I need
to schedule downtime for that.
The system is mainly being used as a dedicated router. It runs OSPF,
BGP and IPFW (aroun
Can someone clarify vmstat -i a little?
What exactly should we look for?
On my systems i have also high total column.
On 2/10/06, Chuck Swiger <[EMAIL PROTECTED]> wrote:
>
> Marcos Bedinelli wrote:
> [ ... ]
> > mull [~]$vmstat -i
> > interrupt total
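As for what to look for in `vmstat -i`: the rate column (interrupts per second), not the total, which just accumulates since boot. A sketch that flags busy interrupt sources in a saved snapshot (the sample lines and the 1000/s threshold are made up):

```shell
# Flag interrupt sources whose rate column exceeds a threshold.
vmstat_sample='interrupt                 total    rate
irq16: em0            123456789   14230
irq1: atkbd0               3466       0'
echo "$vmstat_sample" | awk 'NR>1 && $NF+0 > 1000 {print $1, $NF}'
```

A high, sustained rate on a NIC interrupt line is the usual sign that polling(4) or interrupt moderation is worth a look.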
Marcos Bedinelli wrote:
[ ... ]
> mull [~]$vmstat -i
> interrupt total rate
> irq1: atkbd0    3466    0
> irq6: fdc0        10    0
> irq13: npx0        1    0
> irq14: ata0
Marcos Bedinelli wrote:
[ ... ]
> Does anyone know whether a dual CPU system can help us improve the
> situation? I was wondering if the software interrupt threads would be
> divided between the two processors.
>
> Any help/insight is greatly appreciated
Adding SMP into the mix makes things more c
Marcos Bedinelli (bedinelli) writes:
> I should've mentioned before that we are trying to save some money
> here, therefore the idea is to add a second 2.4GHz Intel Xeon CPU to
> our current box.
>
> However, if there is consensus that a second processor will buy us
> nothing, we'll need to acq
You must migrate to AMD Opteron. INTEL very very suxX.
On Fri, 10 Feb 2006 08:46:00 -0500
Marcos Bedinelli <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> We have a 2.4GHz Intel Xeon machine running FreeBSD 6.0-RELEASE-p2. Due
> to heavy network traffic, CPU utilization on that machine is 100%:
>
22 matches