On Wed, Feb 15, 2017 at 1:49 PM, Michael S. Tsirkin wrote:
> The logic is simple really. With #VCPUs == #queues we can reasonably
> assume this box is mostly doing networking so we can set affinity
> the way we like. With VCPUs > queues clearly VM is doing more stuff
> so we need a userspace policy to decide.
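For concreteness, here is a minimal standalone sketch of the policy Michael describes: auto-assign affinity only when every VCPU has its own queue pair, otherwise leave the decision to a userspace policy. The function name and the simple 1:1 queue-to-CPU mapping are illustrative assumptions, not code from the patch.

#include <stdio.h>

/*
 * Sketch only: decide whether the driver should auto-assign queue IRQ
 * affinity. assign_affinity() and the 1:1 mapping are illustrative,
 * not taken from virtio_net.c.
 */
static void assign_affinity(int nvcpus, int nqueues)
{
    if (nvcpus != nqueues) {
        /* VM is doing more than networking: defer to userspace policy. */
        printf("%d VCPUs vs %d queues: leave affinity to userspace\n",
               nvcpus, nqueues);
        return;
    }
    /* Mostly-networking assumption: pin queue i's IRQ to CPU i. */
    for (int i = 0; i < nqueues; i++)
        printf("queue %d -> CPU %d\n", i, i);
}

int main(void)
{
    assign_affinity(8, 8);   /* auto-assigned 1:1 */
    assign_affinity(16, 8);  /* deferred to a userspace policy */
    return 0;
}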
On Wed, Feb 15, 2017 at 11:17 AM, Michael S. Tsirkin wrote:
> Right. But userspace knows it's random at least. If kernel supplies
> affinity e.g. the way your patch does, userspace ATM accepts this as a
> gospel.
The existing code supplies the same affinity gospels in the #vcpu ==
#queue case today.
On Wed, Feb 15, 2017 at 9:42 AM, Michael S. Tsirkin wrote:
>
>
> > For pure network load, assigning each txqueue IRQ exclusively
> > to one of the cores that generates traffic on that queue is the
> > optimal layout in terms of load spreading. Irqbalance does
> > not have the XPS information to make that optimal assignment.
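As an illustration of that layout (not code from the patch), the sketch below gives TX queue q to CPU q for both XPS and IRQ affinity in the #CPUs == #queues case, by writing hex masks to the usual sysfs/procfs files. "eth0" and the IRQ numbers are placeholders, and it needs root on a machine where those files exist.

#include <stdio.h>

/*
 * Sketch only: one CPU owns each TX queue, for both transmit steering
 * (xps_cpus) and the queue's interrupt (smp_affinity).
 */
static int write_mask(const char *path, unsigned long mask)
{
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%lx\n", mask);
    return fclose(f);
}

int main(void)
{
    /* Placeholder IRQ numbers for each TX queue's interrupt. */
    int irq_of_queue[] = { 40, 41, 42, 43, 44, 45, 46, 47 };
    int nqueues = 8;
    char path[128];

    for (int q = 0; q < nqueues; q++) {
        unsigned long mask = 1UL << q;   /* CPU q only */

        snprintf(path, sizeof(path),
                 "/sys/class/net/eth0/queues/tx-%d/xps_cpus", q);
        write_mask(path, mask);

        snprintf(path, sizeof(path),
                 "/proc/irq/%d/smp_affinity", irq_of_queue[q]);
        write_mask(path, mask);
    }
    return 0;
}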
On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin wrote:
> IIRC irqbalance will bail out and avoid touching affinity
> if you set affinity from driver. Breaking that's not nice.
> Pls correct me if I'm wrong.
I believe you're right that irqbalance will leave the affinity alone.
Irqbalance [...]
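For anyone who wants to check what a given guest actually does, the sketch below walks /proc/irq and prints each IRQ's driver-supplied affinity hint next to its current affinity; an all-zero hint means the driver did not set one. Whether a particular irqbalance build honors or skips hinted IRQs depends on its version and configuration, which is the uncertainty being discussed here.

#include <stdio.h>
#include <dirent.h>
#include <string.h>

/* Read the first line of a proc file into buf, stripping the newline. */
static int read_line(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "r");

    if (!f)
        return -1;
    if (!fgets(buf, len, f)) {
        fclose(f);
        return -1;
    }
    buf[strcspn(buf, "\n")] = '\0';
    fclose(f);
    return 0;
}

int main(void)
{
    DIR *d = opendir("/proc/irq");
    struct dirent *e;
    char path[256], hint[64], cur[64];

    if (!d)
        return 1;
    while ((e = readdir(d)) != NULL) {
        /* Only the numeric per-IRQ directories. */
        if (e->d_name[0] < '0' || e->d_name[0] > '9')
            continue;
        snprintf(path, sizeof(path), "/proc/irq/%s/affinity_hint",
                 e->d_name);
        if (read_line(path, hint, sizeof(hint)))
            continue;
        snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity",
                 e->d_name);
        if (read_line(path, cur, sizeof(cur)))
            continue;
        printf("irq %s: hint=%s current=%s\n", e->d_name, hint, cur);
    }
    closedir(d);
    return 0;
}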
On Sun, Feb 5, 2017 at 11:24 PM, Jason Wang wrote:
>
>
> On February 3, 2017 at 14:19, Ben Serebrin wrote:
>>
>> From: Benjamin Serebrin
>>
>> If the number of virtio queue pairs is not equal to the
>> number of VCPUs, the virtio guest driver doesn't assign
>> any CPU affinity for the queue interrupts or the xps
>> aggregation interrupt.
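To make that gap concrete, here is a minimal sketch of one striping policy for the #VCPUs > #queues case: spread the CPUs round-robin across the queues so every queue still gets an IRQ affinity mask and an XPS mask. The sizes are assumed for the example, and this illustrates the idea rather than the exact assignment in the patch.

#include <stdio.h>

int main(void)
{
    int ncpus = 16, nqueues = 4;   /* assumed example sizes */

    for (int q = 0; q < nqueues; q++) {
        unsigned long mask = 0;

        /* Queue q serves CPUs q, q + nqueues, q + 2*nqueues, ... */
        for (int cpu = q; cpu < ncpus; cpu += nqueues)
            mask |= 1UL << cpu;

        printf("queue %d: cpu mask %#lx (IRQ affinity and xps_cpus)\n",
               q, mask);
    }
    return 0;
}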