xen_pt_pci_config_access_check checks if addr >= 0xFF. 0xFF is a valid
address and should not be ignored.
Signed-off-by: Anoob Soman
---
hw/xen/xen_pt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index b6d71bb..375efa6 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
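For context on the patch above, a hedged sketch of the intended bounds check rather than the actual hunk (the macro and helper names below are illustrative): PCI configuration space offsets run 0x00 through 0xFF, so only offsets of 0x100 and above are out of range and 0xFF must still be allowed through.
#include <stdbool.h>
#include <stdint.h>

#define PCI_CONFIG_SPACE_SIZE 0x100   /* conventional 256-byte config space */

/* Illustrative helper: reject only offsets beyond the last valid byte. */
static bool config_addr_out_of_range(uint32_t addr)
{
    /* A ">= 0xFF" test here would wrongly reject offset 0xFF itself. */
    return addr >= PCI_CONFIG_SPACE_SIZE;
}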
CPU0 never runs the watchdog (process context), triggering the
softlockup detection code to panic.
Binding evtchn:qemu-dm to the next online VCPU will spread hardirq
processing evenly across different CPUs. Later, irqbalance will try to balance
evtchn:qemu-dm, if required.
Signed-off-by: Anoob Soman
---
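A hedged sketch of the approach described in the commit message above, assembled from fragments quoted later in this digest (the per-CPU bind_last_selected_cpu variable, the cpumask selection under desc->lock, and the xen_rebind_evtchn_to_cpu() helper named in the replies); the exact locking order was still under review in this thread, so treat this as illustrative rather than the final patch:
#include <linux/cpumask.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <xen/events.h>

/* Last CPU chosen by this CPU; per-CPU so no global lock is needed. */
static DEFINE_PER_CPU(unsigned int, bind_last_selected_cpu);

static void evtchn_bind_interdom_next_vcpu(int evtchn)
{
        unsigned int selected_cpu, irq;
        struct irq_desc *desc;
        unsigned long flags;

        irq = irq_from_evtchn(evtchn);
        desc = irq_to_desc(irq);
        if (!desc)
                return;

        raw_spin_lock_irqsave(&desc->lock, flags);

        /*
         * Round-robin over the CPUs that are both online and in the
         * IRQ's affinity mask, starting after the last one this CPU
         * picked.
         */
        selected_cpu = this_cpu_read(bind_last_selected_cpu);
        selected_cpu = cpumask_next_and(selected_cpu,
                                        desc->irq_common_data.affinity,
                                        cpu_online_mask);
        if (unlikely(selected_cpu >= nr_cpu_ids))
                selected_cpu = cpumask_first_and(desc->irq_common_data.affinity,
                                                 cpu_online_mask);
        this_cpu_write(bind_last_selected_cpu, selected_cpu);

        /* Move hardirq delivery for this event channel to the chosen CPU. */
        xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);

        raw_spin_unlock_irqrestore(&desc->lock, flags);
}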
On 06/06/17 21:58, Boris Ostrovsky wrote:
Oh well, so much for my request to move it. So you are going to make it
global to the file.
Sorry about the build breakage. I will move
DEFINE_PER_CPU(bind_last_selected_cpu) above
evtchn_bind_interdom_next_vcpu().
-Anoob.
On 06/06/17 20:41, Boris Ostrovsky wrote:
There is a single call site for rebind_irq_to_cpu() so why not call
xen_rebind_evtchn_to_cpu() directly?
Fair enough, I will change it.
+ raw_spin_lock_irqsave(&desc->lock, flags);
Is there a reason why you are using raw_ version?
desc->lock is a raw_spinlock_t, so the raw_ variant has to be used.
CPU0 never runs the watchdog (process context), triggering the
softlockup detection code to panic.
Binding evtchn:qemu-dm to the next online VCPU will spread hardirq
processing evenly across different CPUs. Later, irqbalance will try to balance
evtchn:qemu-dm, if required.
Signed-off-by: Anoob Soman
---
On 05/06/17 17:46, Boris Ostrovsky wrote:
+static void evtchn_bind_interdom_next_vcpu(int evtchn)
+{
+ unsigned int selected_cpu, irq;
+ struct irq_desc *desc = NULL;
Oh, thanks. I will send out a V2, with the modifications.
-Anoob.
On 05/06/17 16:32, Boris Ostrovsky wrote:
I believe we do need to take affinity into consideration even if the
chance that it is non-default is small.
Agreed.
I am not opposed to having bind_last_selected_cpu percpu; I just wanted
to understand the reason better. Additional locking would be a
On 05/06/17 15:10, Boris Ostrovsky wrote:
The reason for percpu instead of global was to avoid locking. We can
have a global variable (last_cpu) without locking, but the value of
last_cpu won't be consistent without locks. Moreover, since
irq_affinity is also used in the calculation of the CPU to bind,
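A minimal sketch of the trade-off being weighed here, with hypothetical names (last_cpu_global, next_selection_*): a global needs a lock to stay consistent, while a per-CPU variable can be updated lock-free because each CPU only touches its own slot; in the real code the per-CPU update also runs with irqs off under desc->lock, so the read-modify-write cannot be interleaved on the same CPU.
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Global variant: one consistent value, but every update takes a lock. */
static unsigned int last_cpu_global;
static DEFINE_SPINLOCK(last_cpu_lock);

static unsigned int next_selection_global(unsigned int num_cpus)
{
        unsigned int sel;

        spin_lock(&last_cpu_lock);
        sel = last_cpu_global = (last_cpu_global + 1) % num_cpus;
        spin_unlock(&last_cpu_lock);
        return sel;
}

/* Per-CPU variant: lock-free; each CPU round-robins from its own state. */
static DEFINE_PER_CPU(unsigned int, last_cpu_pcpu);

static unsigned int next_selection_percpu(unsigned int num_cpus)
{
        unsigned int sel = (this_cpu_read(last_cpu_pcpu) + 1) % num_cpus;

        this_cpu_write(last_cpu_pcpu, sel);
        return sel;
}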
On 02/06/17 17:24, Boris Ostrovsky wrote:
static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
bool force)
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 10f1ef5..1192f24 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
On 02/06/17 16:10, Juergen Gross wrote:
I'd prefer to have irqs disabled from taking the lock until here.
This will avoid problems due to preemption and will be faster, as it
avoids one irq on/off cycle. So:
local_irq_disable();
raw_spin_lock();
...
raw_spin_unlock();
this_cpu_write();
xen_rebind_evtchn_to_cpu();
local_irq_enable();
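Fleshing that skeleton out purely to show the suggested ordering (unlike the earlier sketch, the raw spinlock is dropped first and only the irq-disable is kept across the rebind); the function name and the trivial CPU choice below are placeholders, while the other helpers are the ones named in this thread:
#include <linux/cpumask.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/percpu.h>
#include <xen/events.h>

/* The per-CPU variable discussed elsewhere in this thread. */
static DEFINE_PER_CPU(unsigned int, bind_last_selected_cpu);

static void bind_evtchn_next_vcpu_sketch(int evtchn, struct irq_desc *desc)
{
        unsigned int selected_cpu;

        local_irq_disable();
        raw_spin_lock(&desc->lock);

        /* ...choose selected_cpu from the affinity and online masks;
         * a trivial placeholder choice is used here... */
        selected_cpu = cpumask_first_and(desc->irq_common_data.affinity,
                                         cpu_online_mask);

        raw_spin_unlock(&desc->lock);
        this_cpu_write(bind_last_selected_cpu, selected_cpu);

        /*
         * Still running with irqs disabled: no preemption between the
         * CPU choice and the rebind, and no extra irq off/on cycle
         * around the rebind itself.
         */
        xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);
        local_irq_enable();
}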
CPU0 never runs the watchdog (process context), triggering the
softlockup detection code to panic.
Binding evtchn:qemu-dm to the next online VCPU will spread hardirq
processing evenly across different CPUs. Later, irqbalance will try to balance
evtchn:qemu-dm, if required.
Signed-off-by: Anoob Soman
---
Hi,
Can someone explain why evtchn_fifo_unmask() requires irqs_disabled()?
What happens if irqs are not disabled?
Thanks,
Anoob.
On 30/05/17 18:42, Boris Ostrovsky wrote:
This is not worth an API change, so I guess we are going to have to use
separate calls, as you originally proposed.
Sure. I will stick to making two hypercalls.
Do we need to look at the IRQ affinity mask, if we are going to bind the
event channel to smp_processor_id()?
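A small sketch of what "taking affinity into consideration" could look like here (pick_target_cpu() is a hypothetical helper, not from the patch): prefer the current CPU, but only if the IRQ's affinity mask and the online mask allow it.
#include <linux/cpumask.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/smp.h>

/* Hypothetical helper; assumes the caller runs with preemption disabled
 * (e.g. under desc->lock with irqs off), so smp_processor_id() is stable. */
static unsigned int pick_target_cpu(struct irq_desc *desc)
{
        const struct cpumask *aff = desc->irq_common_data.affinity;
        unsigned int cpu = smp_processor_id();

        if (cpu_online(cpu) && cpumask_test_cpu(cpu, aff))
                return cpu;

        /* Current CPU not allowed: fall back to any permitted online CPU. */
        return cpumask_first_and(aff, cpu_online_mask);
}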
On 16/05/17 20:02, Boris Ostrovsky wrote:
Hi Boris,
Sorry for the delay, I was out traveling.
rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
-if (rc == 0)
+if (rc == 0) {
rc = bind_interdomain.local_port;
+selected_cpu = cpumask_ne
On 16/05/17 20:02, Boris Ostrovsky wrote:
On 05/16/2017 01:15 PM, Anoob Soman wrote:
Hi,
In our XenServer testing, I have seen cases where, when we boot 50 Windows
VMs together, the dom0 kernel softlocks up.
The following is a brief description of what caused the softlockup
detection code to kick in.
Hi,
In our XenServer testing, I have seen cases where, when we boot 50 Windows VMs
together, the dom0 kernel softlocks up.
The following is a brief description of what caused the softlockup
detection code to kick in. An HVM domain boot generates around
200K (evtchn:qemu-dm xen-dyn) interrupts, in
Allocation of new_hash, inside xenvif_new_hash(), always happens
in softirq context, so use GFP_ATOMIC instead of GFP_KERNEL for the new
hash allocation.
Signed-off-by: Anoob Soman
---
drivers/net/xen-netback/hash.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
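For illustration, a minimal sketch of the rule this patch applies (the struct and function names below are stand-ins, not xen-netback's real types): code running in softirq context must not sleep, and GFP_KERNEL allocations may sleep, so GFP_ATOMIC is the appropriate flag there.
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_hash {            /* stand-in for the real hash structure */
        u32 key;
};

/* Called from softirq context in this example, so sleeping is not allowed. */
static struct example_hash *alloc_hash_atomic(void)
{
        /*
         * GFP_KERNEL may sleep while reclaiming memory and would trigger
         * a "sleeping function called from invalid context" warning here;
         * GFP_ATOMIC never sleeps (at the cost of failing more readily).
         */
        return kzalloc(sizeof(struct example_hash), GFP_ATOMIC);
}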