Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-31 Thread Anoob Soman
On 30/05/17 18:42, Boris Ostrovsky wrote:
This is not worth an API change, so I guess we are going to have to use separate calls, as you originally proposed.
Sure. I will stick to making two hypercalls. Do we need to look at the IRQ affinity mask, if we are going to bind the event channel to smp_processor
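For illustration, a minimal sketch of the "two hypercalls" alternative being agreed on here: the port keeps being bound with EVTCHNOP_bind_interdomain as before, and a separate EVTCHNOP_bind_vcpu call then retargets it, rather than extending the bind API. The helper name bind_port_to_vcpu and its placement are hypothetical; only the hypercall interface is taken from the Xen headers.

    #include <xen/interface/event_channel.h>
    #include <asm/xen/hypercall.h>

    /*
     * Sketch only: after evtchn_bind_to_user() has bound the port, a
     * second, separate hypercall moves delivery to the chosen vCPU.
     */
    static int bind_port_to_vcpu(evtchn_port_t port, unsigned int vcpu)
    {
            struct evtchn_bind_vcpu bind_vcpu = {
                    .port = port,
                    .vcpu = vcpu,
            };

            return HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu);
    }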

Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-30 Thread Boris Ostrovsky
On 05/30/2017 11:17 AM, Anoob Soman wrote:
> On 16/05/17 20:02, Boris Ostrovsky wrote:
>
> Hi Boris,
>
> Sorry for the delay, I was out traveling.
>>> rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
>>> -if (rc == 0)
>>> +if (rc == 0) {
>>> rc = b

Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-30 Thread Anoob Soman
On 16/05/17 20:02, Boris Ostrovsky wrote:

Hi Boris,

Sorry for the delay, I was out traveling.

rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
-if (rc == 0)
+if (rc == 0) {
rc = bind_interdomain.local_port;
+selected_cpu = cpumask_ne
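The quoted diff is truncated in the archive; the following is only a sketch of the round-robin idea it appears to implement, with illustrative names (how "selected_cpu" is protected is exactly what the rest of the thread debates):

    rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
    if (rc == 0) {
            rc = bind_interdomain.local_port;
            /* Advance to the next online CPU, wrapping at the end. */
            selected_cpu = cpumask_next(selected_cpu, cpu_online_mask);
            if (selected_cpu >= nr_cpu_ids)
                    selected_cpu = cpumask_first(cpu_online_mask);
            /* Second call: move the freshly bound port off CPU0. */
            xen_rebind_evtchn_to_cpu(bind_interdomain.local_port,
                                     selected_cpu);
    }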

Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-17 Thread Boris Ostrovsky
>>> "selected_cpu" needs to be protected, but I would like to avoid taking >>> a lock. One way to avoid taking lock (before >>> xen_rebind_evtchn_to_cpu()) would be to use >>> "local_port%num_present_cpus()" or " smp_processor_id()" as index into >>> cpumask_next. >> The latter sounds better to me

Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-17 Thread Anoob Soman
On 16/05/17 20:02, Boris Ostrovsky wrote:
On 05/16/2017 01:15 PM, Anoob Soman wrote:
Hi,
In our XenServer testing, I have seen cases where we boot 50 Windows VMs together and the dom0 kernel softlocks up. The following is a brief description of the problem and of what caused the softlockup detection code to ki

Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-16 Thread Boris Ostrovsky
On 05/16/2017 01:15 PM, Anoob Soman wrote:
> Hi,
>
> In our XenServer testing, I have seen cases where we boot 50 Windows
> VMs together and the dom0 kernel softlocks up.
>
> The following is a brief description of the problem and of what caused
> the softlockup detection code to kick. An HVM domain boot generates a

[Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU

2017-05-16 Thread Anoob Soman
Hi,
In our XenServer testing, I have seen cases where we boot 50 Windows VMs together and the dom0 kernel softlocks up. The following is a brief description of the problem and of what caused the softlockup detection code to kick. An HVM domain boot generates around 200K (evtchn:qemu-dm xen-dyn) interrupts, in