On Mon, 26 Feb 2007 13:42:50 +0300 (MSK) malc wrote:
> On Mon, 26 Feb 2007, Pavel Machek wrote:
>
> > Hi!
> >
> >> [..snip..]
> >>
> >>>> The current situation ought to be documented. Better yet some flag
> >>>> can
> >>>
> >>> It probably _is_ documented, somewhere :-). If you find a nice place
> >>> where to document it (top manpage?), go ahead with the patch.
Hi!
> [..snip..]
>
> >>The current situation ought to be documented. Better yet some flag
> >>can
> >
> >It probably _is_ documented, somewhere :-). If you find a nice place
> >where to document it (top manpage?), go ahead with the patch.
>
>
> How about this:
Looks okay to me. (You should probab
On Wed, 14 Feb 2007, Pavel Machek wrote:
> Hi!
>
> [..snip..]
>
> >>The current situation ought to be documented. Better yet some flag
> >>can
> >
> >It probably _is_ documented, somewhere :-). If you find a nice place
> >where to document it (top manpage?), go ahead with the patch.

How about this:

CPU load
--------
Hi!
>
> >>>I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu
> >>>without being noticed.
> >>
> >>Slightly changed version of hog (around 3 lines in total changed) does that
> >>easily on 2.6.18.3 on PPC.
> >>
> >>http://www.boblycat.org/~malc/apc/load-hog-ppc.png
> >
> >I g
On Wednesday 14 February 2007 18:28, malc wrote:
> On Wed, 14 Feb 2007, Con Kolivas wrote:
> > On Wednesday 14 February 2007 09:01, malc wrote:
> >> On Mon, 12 Feb 2007, Pavel Machek wrote:
> >>> Hi!
>
> [..snip..]
>
> >>> I have (had?) code that 'exploits' this. I believe I could eat 90% of
> >>> cpu without being noticed.
Hi!
> The kernel looks at what is using cpu _only_ during the timer
> interrupt. Which means if your HZ is 1000 it looks at what is running
> at precisely the moment those 1000 timer ticks occur. It is
> theoretically possible using this measurement system to use >99% cpu
> and record 0 usage.
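The trick such a hog relies on is easy to sketch. A minimal illustration
(not malc's actual hog, which is linked above; the 0.8 ms / 0.1 ms timings
are assumptions for HZ=1000, i.e. a 1 ms tick):

    /* Burn the cpu for most of each 1 ms tick, then block in nanosleep()
     * so the task is asleep whenever the timer interrupt samples who is
     * running; /proc/stat then charges the whole tick to idle.
     * Link with -lrt on older glibc for clock_gettime(). */
    #include <time.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        const double burn = 0.0008;          /* ~80% of a 1 ms tick */
        struct timespec nap = { 0, 100000 }; /* 0.1 ms; rounded up to the
                                                next tick edge */
        for (;;) {
            double t0 = now();
            while (now() - t0 < burn)
                ;                            /* eat cpu */
            nanosleep(&nap, NULL);           /* asleep when the tick hits */
        }
    }

Run on a kernel with tick-based accounting, top(1) should show this
process near 0% cpu while it is in fact eating most of one core.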
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
>
> How does the kernel calculate the value it places in `/proc/stat' at
> the 4th position (i.e. "idle: twiddling thumbs")?
>
..
>
> Later a small kernel module was developed that tried to time how much
> time is spent in the idle handler inside the kernel
On Monday 12 February 2007 18:10, malc wrote:
> On Mon, 12 Feb 2007, Con Kolivas wrote:
> > Lots of confusion comes from this, and often people think their pc
> > suddenly uses a lot less cpu when they change from 1000HZ to 100HZ and
> > use this as an argument/reason for changing to 100HZ when in
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
> Hello,
>
> How does the kernel calculate the value it places in `/proc/stat' at
> the 4th position (i.e. "idle: twiddling thumbs")?
>
> For background information as to why this question arose in the first
> place, read on.
>
> While writing the code dealing
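For reference, the field in question can be watched from userspace. A
rough sketch that samples the aggregate cpu line twice and prints the
idle share of the interval (field order user nice system idle, as in
proc(5); values are cumulative ticks, and later fields such as iowait
are ignored for brevity):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read the first four fields of the "cpu" line of /proc/stat. */
    static void read_cpu(unsigned long long v[4])
    {
        FILE *f = fopen("/proc/stat", "r");

        if (!f || fscanf(f, "cpu %llu %llu %llu %llu",
                         &v[0], &v[1], &v[2], &v[3]) != 4)
            exit(1);
        fclose(f);
    }

    int main(void)
    {
        unsigned long long a[4], b[4], total = 0;
        int i;

        read_cpu(a);
        sleep(5);                 /* measurement interval */
        read_cpu(b);
        for (i = 0; i < 4; i++)
            total += b[i] - a[i];
        printf("idle: %.1f%%\n", 100.0 * (b[3] - a[3]) / total);
        return 0;
    }

Running this next to a hog like the one sketched earlier makes the
discrepancy the thread is about directly visible.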
On Thursday 08 February 2007 09:42, you wrote:
> On Wed, 7 Feb 2007, Arjan van de Ven wrote:
> > Marc Donner wrote:
> >> 501: 215717 209388 209430 202514 PCI-MSI-edge eth10
> >> 502: 927 1019 1053888 PCI-MSI-edge eth11
> >
> > this is odd, this is not an irq distribution that irqbalance should
> > give you
> can you send me the output of
>
> cat /proc/interrupts

here it is:
irqbalance is running.
network loaded with 600Mbit/s for about 5 minutes.

           CPU0       CPU1       CPU2       CPU3
  0:      37713      41667      41673      49914   IO-APIC-edge   timer
  1:          0          0
NMI:        451         39         42
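When irqbalance will not spread the load, the kernel also lets you place
an irq by hand: /proc/irq/<n>/smp_affinity takes a hexadecimal cpu
bitmask. A minimal sketch (the irq number 502 is taken from the eth11
line above; the mask 0x2, meaning CPU1, is an arbitrary choice):

    #include <stdio.h>

    int main(void)
    {
        /* Pin irq 502 to CPU1 (mask 0x2).  Needs root. */
        FILE *f = fopen("/proc/irq/502/smp_affinity", "w");

        if (!f) {
            perror("/proc/irq/502/smp_affinity");
            return 1;
        }
        fprintf(f, "2\n");
        fclose(f);
        return 0;
    }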
Arjan van de Ven wrote:
> Pablo Sebastian Greco wrote:
> > 2296: 427 426 436 134563009 PCI-MSI-edge eth1
> > 2297: 252 252 135926471 257 PCI-MSI-edge eth0
>
> this suggests that cores would be busy rather than only one

Yes, but you are looki
Arjan van de Ven wrote:
> Marc Donner wrote:
> > > see http://www.irqbalance.org to get irqbalance
> >
> > I now have tried irqloadbalance, but the same problem.
>
> can you send me the output of
>
> cat /proc/interrupts
>
> (taken when you are or have been loading the network)
>
> maybe there's something fishy going on
On Tue, 2007-02-06 at 18:32 +0100, Marc Donner wrote:
> Hi @all
>
> we have detected some problems on our live systems and so I have built a
> test setup in our lab as follows:
>
> 3 Core 2 duo servers, each with 2 CPUs, with GE interfaces. 2 of them are
> only for generating network traffic. t