With NAPI, if I have a few interrupts it likely implies I have a huge
network load (and therefore CPU use) and would be much happier if
you didn't start moving more interrupt load to that already loaded CPU.
current irqbalance accounts for napi by using the number of packets as
indicator for load, not the number of interrupts. (for network
interrupts obviously)
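The point here is that under NAPI a heavily loaded NIC raises few interrupts, so packet counts, not interrupt counts, are the useful load signal. A rough sketch of that idea in C (not irqbalance's actual code; the interface name "eth0" is a placeholder), sampling rx+tx packet counters from /proc/net/dev:

    /* Sketch: estimate NIC load from packet counts in /proc/net/dev,
     * since under NAPI a busy interface may raise very few interrupts. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Return rx+tx packet count for one interface, 0 if not found. */
    static unsigned long long nic_packets(const char *ifname)
    {
        FILE *f = fopen("/proc/net/dev", "r");
        char line[512];
        unsigned long long total = 0;

        if (!f)
            return 0;
        while (fgets(line, sizeof(line), f)) {
            char name[32];
            unsigned long long rx_bytes, rx_pkts, tx_bytes, tx_pkts;

            /* "eth0: rx_bytes rx_pkts err drop fifo frame comp mcast tx_bytes tx_pkts ..." */
            if (sscanf(line, " %31[^:]: %llu %llu %*u %*u %*u %*u %*u %*u %llu %llu",
                       name, &rx_bytes, &rx_pkts, &tx_bytes, &tx_pkts) == 5 &&
                strcmp(name, ifname) == 0)
                total = rx_pkts + tx_pkts;
        }
        fclose(f);
        return total;
    }

    int main(void)
    {
        unsigned long long before, after;

        before = nic_packets("eth0");        /* "eth0" is a placeholder */
        sleep(1);
        after = nic_packets("eth0");
        printf("eth0: ~%llu packets/sec\n", after - before);
        return 0;
    }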
The best way to achieve such balancing is to have the network card help
and essentially be able to select the CPU to notify, while at the same
time considering:
a) avoiding any packet reordering - which restricts a flow to being
processed on a single CPU, at least within a timeframe
b) being per-CPU-load-
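These two constraints map naturally onto a per-flow indirection table: hash the flow so its packets always land in the same bucket, keep a bucket on the CPU it was first given (no reordering), and steer new buckets to the least-loaded CPU. A minimal sketch in C follows; every name in it is invented for illustration (real NICs do roughly this in hardware, e.g. with a Toeplitz hash and an RSS indirection table), and re-steering an idle bucket after the "timeframe" mentioned above is omitted.

    #include <stdint.h>

    #define NR_CPUS     4          /* assumption: 4 cores, as in Robert's setup */
    #define TABLE_SIZE  128

    static unsigned long cpu_load[NR_CPUS];   /* e.g. packets handed to each CPU */
    static int bucket_to_cpu[TABLE_SIZE];     /* flow bucket -> CPU, -1 = unassigned */

    static void steering_init(void)
    {
        for (int i = 0; i < TABLE_SIZE; i++)
            bucket_to_cpu[i] = -1;
    }

    /* Trivial 5-tuple hash; only its per-flow stability matters here. */
    static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                              uint16_t sport, uint16_t dport, uint8_t proto)
    {
        uint32_t h = saddr ^ daddr ^ proto;
        h ^= ((uint32_t)sport << 16) | dport;
        h ^= h >> 16;
        return h;
    }

    /* Pick the CPU to notify for one packet of the given flow. */
    static int pick_cpu(uint32_t hash)
    {
        int bucket = hash % TABLE_SIZE;
        int cpu = bucket_to_cpu[bucket];

        if (cpu < 0) {
            /* new flow bucket: steer it to the least-loaded CPU (constraint b) */
            cpu = 0;
            for (int i = 1; i < NR_CPUS; i++)
                if (cpu_load[i] < cpu_load[cpu])
                    cpu = i;
            bucket_to_cpu[bucket] = cpu;
        }
        /* existing buckets keep their CPU, so a flow is never split (constraint a) */
        cpu_load[cpu]++;
        return cpu;
    }

Usage would be something like pick_cpu(flow_hash(saddr, daddr, sport, dport, proto)) per received packet, after calling steering_init() once.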
Hi Krzysztof,
On 12/29/06, Krzysztof Oledzki <[EMAIL PROTECTED]> wrote:
On Wed, 27 Dec 2006, jamal wrote:
> On Wed, 2006-27-12 at 09:09 +0200, Robert Iakobashvili wrote:
>
>>
>> My scenario is treatment of RTP packets in kernel space with a single network
>> card (both Rx and Tx). The default
On Wed, 27 Dec 2006, jamal wrote:
On Wed, 2006-27-12 at 09:09 +0200, Robert Iakobashvili wrote:
My scenario is treatment of RTP packets in kernel space with a single network
card (both Rx and Tx). The default of the Intel 5000 series chipset is
affinity of each
network card to a certain CPU
On Wed, 2006-12-27 at 09:44 -0500, jamal wrote:
> On Wed, 2006-27-12 at 14:08 +0100, Arjan van de Ven wrote:
>
> > sure; however the kernel doesn't provide more accurate information
> > currently (and I doubt it could even, it's not so easy to figure out
> > which interface triggered the softirq if 2 interfaces share the cpu
On Wed, 2006-27-12 at 14:08 +0100, Arjan van de Ven wrote:
> sure; however the kernel doesn't provide more accurate information
> currently (and I doubt it could even, it's not so easy to figure out
> which interface triggered the softirq if 2 interfaces share the cpu, and
> then, how much work ca
On Wed, 2006-27-12 at 09:09 +0200, Robert Iakobashvili wrote:
>
> My scenario is treatment of RTP packets in kernel space with a single network
> card (both Rx and Tx). The default of the Intel 5000 series chipset is
> affinity of each
> network card to a certain CPU. Currently, neither with irqbalance
> Although still insufficient in certain cases. Not all flows are equal; as an
> example, an IPSEC flow with 1000 packets bound to one CPU will likely
> utilize more cycles than 5000 packets that are being plain forwarded on
> another CPU.
sure; however the kernel doesn't provide more accurate information currently
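To make the example above concrete: weight packets by an assumed per-packet cost instead of counting them equally, and the 1000-packet IPSEC CPU comes out busier than the 5000-packet forwarding CPU. The relative costs below are invented, purely for illustration:

    /* Illustrative only: invented relative per-packet costs. */
    enum { COST_FORWARD = 1, COST_IPSEC = 8 };

    static unsigned long weighted_load(unsigned long fwd_pkts,
                                       unsigned long ipsec_pkts)
    {
        return fwd_pkts * COST_FORWARD + ipsec_pkts * COST_IPSEC;
    }

    /* weighted_load(5000, 0) == 5000, weighted_load(0, 1000) == 8000:
     * the CPU doing IPSEC is the busier one despite seeing fewer packets. */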
On 12/27/06, jamal <[EMAIL PROTECTED]> wrote:
On Wed, 2006-27-12 at 01:28 +0100, Arjan van de Ven wrote:
> current irqbalance accounts for napi by using the number of packets as
> indicator for load, not the number of interrupts. (for network
> interrupts obviously)
>
Sounds a lot more promising.
On Wed, 2006-27-12 at 01:28 +0100, Arjan van de Ven wrote:
> current irqbalance accounts for napi by using the number of packets as
> indicator for load, not the number of interrupts. (for network
> interrupts obviously)
>
Sounds a lot more promising.
Although still insufficient in certain cases
On Tue, 2006-12-26 at 17:46 -0500, jamal wrote:
> On Tue, 2006-26-12 at 23:06 +0100, Arjan van de Ven wrote:
>
> > it is; that's why irqbalance tries really hard (with a few very rare
> > exceptions) to keep networking irqs to the same cpu all the time...
> >
>
> The problem with irqbalance when
On Tue, 2006-26-12 at 23:06 +0100, Arjan van de Ven wrote:
> it is; that's why irqbalance tries really hard (with a few very rare
> exceptions) to keep networking irqs to the same cpu all the time...
>
The problem with irqbalance when I last used it is that it doesn't take into
consideration CPU utilization.
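For reference, the input said to be missing here is readily available from /proc/stat. A sketch (assuming 4 CPUs; the field layout is the 2.6-era "cpuN user nice system idle iowait irq softirq ...") of reading per-CPU busy/idle time that a balancer could sample before moving an IRQ; utilization over an interval is delta-busy / (delta-busy + delta-idle) between two such samples:

    #include <stdio.h>

    #define NR_CPUS 4                     /* assumption: 4 cores */

    struct cpu_times { unsigned long long busy, idle; };

    /* Fill t[0..NR_CPUS-1]; returns 0 on success, -1 otherwise. */
    static int read_cpu_times(struct cpu_times t[NR_CPUS])
    {
        FILE *f = fopen("/proc/stat", "r");
        char line[256];
        int n = 0;

        if (!f)
            return -1;
        while (n < NR_CPUS && fgets(line, sizeof(line), f)) {
            int cpu;
            unsigned long long user, nice, sys, idle, iowait, irq, softirq;

            if (sscanf(line, "cpu%d %llu %llu %llu %llu %llu %llu %llu",
                       &cpu, &user, &nice, &sys, &idle, &iowait, &irq, &softirq) == 8 &&
                cpu >= 0 && cpu < NR_CPUS) {
                t[cpu].busy = user + nice + sys + irq + softirq;
                t[cpu].idle = idle + iowait;
                n++;
            }
        }
        fclose(f);
        return n == NR_CPUS ? 0 : -1;
    }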
On Tue, 2006-26-12 at 21:51 +0200, Robert Iakobashvili wrote:
BTW, turn PCI-E support on in the kernel build and do cat /proc/interrupts to
see what I mean.
> In the meantime I have removed all userland processes from CPU0,
> which handles the network card interrupts and all packet processing (kernel-space).
>
On Tue, 2006-12-26 at 13:44 -0500, jamal wrote:
> If you compile in PCI-E support you should have more control of the
> MSI-X, no? I would tie the MSI to a specific processor statically; my
> past experiences with any form of interrupt balancing with network loads
> have been horrible.
it is; that's why irqbalance tries really hard (with a few very rare
exceptions) to keep networking irqs to the same cpu all the time...
On 12/26/06, jamal <[EMAIL PROTECTED]> wrote:
If you compile in PCI-E support you should have more control of the
MSI-X, no? I would tie the MSI to a specific processor statically; my
past experiences with any form of interrupt balancing with network loads
have been horrible.
cheers,
jamal
Thanks,
If you compile in PCI-E support you should have more control of the
MSI-X, no? I would tie the MSI to a specific processor statically; my
past experiences with any form of interrupt balancing with network loads
have been horrible.
cheers,
jamal
On Mon, 2006-25-12 at 14:54 +0200, Robert Iakobashvili wrote:
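Pinning the IRQ statically, as suggested here, is just writing a CPU bitmask to /proc/irq/<N>/smp_affinity. The IRQ number (24) and the mask below are placeholders; the real IRQ for the NIC comes from /proc/interrupts, and a running irqbalance may move the IRQ again afterwards. From a shell, echo 2 > /proc/irq/24/smp_affinity does the same thing:

    #include <stdio.h>

    int main(void)
    {
        /* placeholder IRQ number; look up the NIC's IRQ in /proc/interrupts */
        FILE *f = fopen("/proc/irq/24/smp_affinity", "w");

        if (!f) {
            perror("smp_affinity");
            return 1;
        }
        /* bitmask of allowed CPUs: 0x2 == CPU1 only */
        fprintf(f, "2\n");
        fclose(f);
        return 0;
    }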
Arjan,
On 12/25/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
On Mon, 2006-12-25 at 13:26 +0200, Robert Iakobashvili wrote:
>
> > Am I understanding you correctly that you want to spread the load of the
> > networking IRQ roughly equally over 2 cpus (or cores or ..)?
>
> Yes, 4 cores.
>
> > If
On Mon, 2006-12-25 at 13:26 +0200, Robert Iakobashvili wrote:
>
> > Am I understanding you correctly that you want to spread the load of the
> > networking IRQ roughly equally over 2 cpus (or cores or ..)?
>
> Yes, 4 cores.
>
> > If so, that is very very suboptimal, especially for networking (si
Hi Arjan,
On 12/25/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
On Sun, 2006-12-24 at 11:34 +0200, Robert Iakobashvili wrote:
> Sorry for repeating, now in text mode.
>
> Is there a way to balance IRQs from a network card among Intel CPU cores
> with the Intel 5000 series chipset?
>
> We tried th
On Sun, 2006-12-24 at 11:34 +0200, Robert Iakobashvili wrote:
> Sorry for repeating, now in text mode.
>
> Is there a way to balance IRQs from a network card among Intel CPU cores
> with the Intel 5000 series chipset?
>
> We tried the Broadcom network card (lspci is below) both in MSI and
> io-apic mode
Sorry for repeating, now in text mode.
Is there a way to balance IRQs from a network card among Intel CPU cores
with the Intel 5000 series chipset?
We tried the Broadcom network card (lspci is below) both in MSI and
io-apic mode, but found that the card interrupt may be moved to
another logical CPU,