[Xen-devel] PCI-Passthrough Error: transmit queue timed out

2015-01-31 Thread openlui
Hi all,
I have tried PCI passthrough to a DomU in Xen. However, if we send packets to 
the DomU for a while, there is a chance that the DomU's networking will be 
disconnected. The corresponding syslog messages are shown at the end of this mail.
   I also found the analysis at [1] and wonder whether it explains the 
problem. Has anyone else met this issue who can give me some 
advice?  The environment is as follows:
  Dom0: SUSE 12 (3.12.28)
  DomU: 3.17.4 mainline


[1] 
http://martinbj2008.github.io/blog/2014/09/17/netdevice-watchdog-cause-tx-queue-schedule/
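The failure mode analyzed at [1] boils down to the watchdog check in
net/sched/sch_generic.c. As a minimal userspace sketch (field names
simplified from the kernel's netdev_queue, not the actual kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the check dev_watchdog() performs in
 * net/sched/sch_generic.c: a TX queue that the driver has stopped
 * and that has not started a transmit within watchdog_timeo ticks
 * is declared hung, triggering the WARNING seen in the log below. */
struct txq_state {
    bool stopped;          /* driver called netif_tx_stop_queue() */
    uint64_t trans_start;  /* tick of the last transmit start */
};

bool txq_timed_out(const struct txq_state *q, uint64_t now,
                   uint64_t watchdog_timeo)
{
    return q->stopped && (now - q->trans_start) > watchdog_timeo;
}
```

When this predicate holds for any queue, dev_watchdog() prints the
"transmit queue N timed out" warning and invokes the driver's
ndo_tx_timeout handler.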


2015-01-31T04:53:45.005161-05:00 linux-gbze kernel: [63331.820079] 
[ cut here ]
2015-01-31T04:53:45.005187-05:00 linux-gbze kernel: [63331.820163] WARNING: 
CPU: 0 PID: 0 at net/sched/sch_generic.c:264 dev_watchdog+0x266/0x270()
2015-01-31T04:53:45.005190-05:00 linux-gbze kernel: [63331.820196] NETDEV 
WATCHDOG: eth0 (ixgbe): transmit queue 1 timed out
2015-01-31T04:53:45.005192-05:00 linux-gbze kernel: [63331.820227] Modules 
linked in: iptable_filter(E) ip_tables(E) x_tables(E) openvswitch(E) gre(E) 
vxlan(E) udp_tunnel(E) libcrc32c(E) xen_netback(OE) ixgbe(OE) nls_utf8(E) 
isofs(E) fuse(E) iscsi_ibft(E) iscsi_boot_sysfs(E) af_packet(E) intel_rapl(E) 
joydev(E) ppdev(E) crct10dif_pclmul(E) parport_pc(E) crc32_pclmul(E) parport(E) 
crc32c_intel(E) ghash_clmulni_intel(E) ptp(E) aesni_intel(E) pps_core(E) 
aes_x86_64(E) mdio(E) processor(E) lrw(E) dca(E) gf128mul(E) glue_helper(E) 
serio_raw(E) i2c_piix4(E) pcspkr(E) ablk_helper(E) button(E) cryptd(E) 
dm_mod(E) ext4(E) crc16(E) mbcache(E) jbd2(E) sr_mod(E) cdrom(E) ata_generic(E) 
ata_piix(E) ahci(E) libahci(E) xen_blkfront(E) floppy(E) cirrus(E) 
drm_kms_helper(E) ttm(E) drm(E) libata(E) sg(E) scsi_mod(E) autofs4(E) [last 
unloaded: ixgbe]


2015-01-31T04:53:45.005195-05:00 linux-gbze kernel: [63331.820337] CPU: 0 PID: 
0 Comm: swapper/0 Tainted: G   OE  3.17.4-4-default #10
2015-01-31T04:53:45.005197-05:00 linux-gbze kernel: [63331.820368] Hardware 
name: Xen HVM domU, BIOS 4.4.1_06-2.2 10/08/2014
2015-01-31T04:53:45.005199-05:00 linux-gbze kernel: [63331.820398]  
0009 88010f803db0 8158c8e7 88010f803df8
2015-01-31T04:53:45.005200-05:00 linux-gbze kernel: [63331.820432]  
88010f803de8 8106d9ed 0001 8800ead6
2015-01-31T04:53:45.005202-05:00 linux-gbze kernel: [63331.820464]  
0040  8800ead6 88010f803e48


2015-01-31T04:53:45.005203-05:00 linux-gbze kernel: [63331.820497] Call Trace:


2015-01-31T04:53:45.005205-05:00 linux-gbze kernel: [63331.820528]
[] dump_stack+0x45/0x56
2015-01-31T04:53:45.005206-05:00 linux-gbze kernel: [63331.820571]  
[] warn_slowpath_common+0x7d/0xa0
2015-01-31T04:53:45.005208-05:00 linux-gbze kernel: [63331.820603]  
[] warn_slowpath_fmt+0x4c/0x50
2015-01-31T04:53:45.005210-05:00 linux-gbze kernel: [63331.820641]  
[] ? xen_timer_interrupt+0x10f/0x150
2015-01-31T04:53:45.005212-05:00 linux-gbze kernel: [63331.820675]  
[] dev_watchdog+0x266/0x270
2015-01-31T04:53:45.005213-05:00 linux-gbze kernel: [63331.820708]  
[] ? dev_graft_qdisc+0x80/0x80
2015-01-31T04:53:45.005215-05:00 linux-gbze kernel: [63331.820744]  
[] call_timer_fn+0x36/0x100
2015-01-31T04:53:45.005216-05:00 linux-gbze kernel: [63331.820777]  
[] ? dev_graft_qdisc+0x80/0x80
2015-01-31T04:53:45.005218-05:00 linux-gbze kernel: [63331.820811]  
[] run_timer_softirq+0x1fa/0x2e0
2015-01-31T04:53:45.005220-05:00 linux-gbze kernel: [63331.820845]  
[] __do_softirq+0xe5/0x280
2015-01-31T04:53:45.005221-05:00 linux-gbze kernel: [63331.820878]  
[] irq_exit+0xad/0xc0
2015-01-31T04:53:45.005223-05:00 linux-gbze kernel: [63331.820912]  
[] xen_evtchn_do_upcall+0x38/0x50
2015-01-31T04:53:45.005224-05:00 linux-gbze kernel: [63331.820948]  
[] xen_hvm_callback_vector+0x6d/0x80
2015-01-31T04:53:45.005225-05:00 linux-gbze kernel: [63331.820978]
[] ? native_safe_halt+0x6/0x10
2015-01-31T04:53:45.005227-05:00 linux-gbze kernel: [63331.821020]  
[] default_idle+0x1f/0xc0
2015-01-31T04:53:45.005229-05:00 linux-gbze kernel: [63331.821054]  
[] arch_cpu_idle+0xf/0x20
2015-01-31T04:53:45.005231-05:00 linux-gbze kernel: [63331.821087]  
[] cpu_startup_entry+0x2f4/0x330
2015-01-31T04:53:45.005232-05:00 linux-gbze kernel: [63331.821121]  
[] rest_init+0x77/0x80
2015-01-31T04:53:45.005234-05:00 linux-gbze kernel: [63331.821156]  
[] start_kernel+0x46f/0x47a
2015-01-31T04:53:45.005235-05:00 linux-gbze kernel: [63331.821188]  
[] ? set_init_arg+0x53/0x53
2015-01-31T04:53:45.005236-05:00 linux-gbze kernel: [63331.821222]  
[] ? early_idt_handlers+0x120/0x120
2015-01-31T04:53:45.005238-05:00 linux-gbze kernel: [63331.821255]  
[] x86_64_start_reservations+0x2a/0x2c
2015-01-31T04:53:45.005240-05:00 linux-gbze kernel: [63331.821288]  
[] x86_64_start_kernel+0x143/0x152
2015-01-31T04:53:45.005242-05:00 linux-gbze kernel: [63331.821320] ---[ end 
trace a40fbdbc6585a982 ]---
2015-01-31

[Xen-devel] apic-v reduce network performance in my test case

2015-01-31 Thread Liuqiming (John)

Hi all,

Recently I met an odd performance problem: when I turn on the APIC 
virtualization feature (apicv=1), the network performance of a Windows 
guest becomes worse.


My test case is as follows: the host runs only one Windows 2008 R2 HVM 
guest, and an SR-IOV VF NIC is passed through to this guest.
The guest uses this NIC to access a NAS device. There is no frontend or 
backend for network or storage; all data is transferred over the 
passed-through network.


The xentrace data shows that the main difference between APIC-v and 
non-APIC-v is how the guest writes APIC registers: the 
EXIT_REASON_MSR_WRITE vmexits cost much more total time than 
EXIT_REASON_APIC_WRITE, yet with WRMSR there are far fewer PAUSE 
vmexits than with APIC-v.

This is the odd part. Any ideas?

APIC-v OFF:
 4099582 VMEXIT   3467051359128 TSC HLT
10135140 VMEXIT 42484175528 TSC WRMSR
 1651714 VMEXIT  9785961276 TSC I/O instruction
  532702 VMEXIT  3887971388 TSC External interrupt
  290546 VMEXIT  2262312440 TSC PAUSE
  588077 VMEXIT   914905312 TSC Control-register accesses
  383617 VMEXIT   453329940 TSC Exception or non-maskable 
interrupt (NMI)

  132717 VMEXIT   232289792 TSC Interrupt window
   25534 VMEXIT   198718764 TSC EPT violation
   53969 VMEXIT62886752 TSC TPR below threshold
7996 VMEXIT34735376 TSC RDMSR
1615 VMEXIT16042768 TSC VMCALL
 147 VMEXIT  272320 TSC CPUID
   7 VMEXIT6484 TSC WBINVD
   2 VMEXIT6308 TSC MOV DR  

APIC-v ON:
 3717629 VMEXIT   3459905385332 TSC HLT
 2282403 VMEXIT 23099880196 TSC APIC write
 3900448 VMEXIT 13073253548 TSC PAUSE
 1643729 VMEXIT 11719626776 TSC I/O instruction
 2194667 VMEXIT  5321640708 TSC WRMSR
  214425 VMEXIT  2198994944 TSC External interrupt
  566795 VMEXIT  1940710108 TSC Control-register accesses
  342688 VMEXIT   659665532 TSC Exception or non-maskable 
interrupt (NMI)

  190623 VMEXIT   644411612 TSC VMCALL
  188657 VMEXIT   295956932 TSC Virtualized EOI
   24350 VMEXIT   194817152 TSC EPT violation
4393 VMEXIT23282044 TSC RDMSR
 179 VMEXIT 1688676 TSC CPUID
   7 VMEXIT6884 TSC WBINVD
   1 VMEXIT4200 TSC MOV DR
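Dividing total TSC by exit count in the tables above gives the average
cost per exit: a WRMSR exit without APIC-v averages roughly
42484175528 / 10135140 ≈ 4,191 cycles, while an APIC-write exit with
APIC-v averages 23099880196 / 2282403 ≈ 10,120 cycles; the per-exit cost
moves the opposite way from the totals. A trivial helper to reproduce
these figures:

```c
#include <stdint.h>

/* Average TSC cycles per VM exit, derived from the xentrace
 * totals in the tables above. */
uint64_t avg_cycles_per_exit(uint64_t total_tsc, uint64_t exits)
{
    return exits ? total_tsc / exits : 0;
}
```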

In commit 7f2e992b824ec62a2818e64390ac2ccfbd74e6b7
("VMX/Viridian: suppress MSR-based APIC suggestion when having APIC-V"), 
the MSR-based APIC is disabled when APIC-v is on. I wonder whether they can 
coexist in some way; it seems that for a Windows guest the MSR-based APIC 
has better performance.



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen/arm: Fix rtds scheduler for arm

2015-01-31 Thread Ian Campbell
On Fri, 2015-01-30 at 18:19 +0200, Denys Drozdov wrote:

> since context_saved is executed right from IRQ level. ARM interrupt
> handling differs from x86: ARM runs context_saved with IRQs disabled
> in CPSR, whereas x86 has IRQs enabled. Thus it fails on the ASSERT
> inside spin_lock_irq when running on ARM. I guess it should work on
> x86, so this issue is ARM platform specific.

FWIW I was waiting for it to happen in a xen-unstable run, but the latest
osstest gate run at
http://www.chiark.greenend.org.uk/~xensrcts/logs/33915/ which included
Dario's patches to rationalize the scheduler tests vs. architectures also
resulted in a similar-sounding failure on credit2:
http://www.chiark.greenend.org.uk/~xensrcts/logs/33915/test-armhf-armhf-xl-credit2/info.html
http://www.chiark.greenend.org.uk/~xensrcts/logs/33915/test-armhf-armhf-xl-credit2/serial-marilith-n5.txt

[Thu Jan 29 13:29:28 2015](XEN) Assertion 'local_irq_is_enabled()' 
failed at spinlock.c:137
[Thu Jan 29 13:29:28 2015](XEN) [ Xen-4.6-unstable  arm32  debug=y  
Not tainted ]
[Thu Jan 29 13:29:28 2015](XEN) CPU:0
[Thu Jan 29 13:29:28 2015](XEN) PC: 00229734 
_spin_lock_irq+0x18/0x94
[Thu Jan 29 13:29:28 2015](XEN) CPSR:   20da MODE:Hypervisor
[Thu Jan 29 13:29:28 2015](XEN)  R0: 4000823c R1:  R2: 
02faf080 R3: 60da
[Thu Jan 29 13:29:28 2015](XEN)  R4: 4000823c R5: 4000d000 R6: 
4000823c R7: 002ee020
[Thu Jan 29 13:29:28 2015](XEN)  R8: 4000f218 R9:  
R10:0026fe08 R11:7ffcfefc R12:0002
[Thu Jan 29 13:29:28 2015](XEN) HYP: SP: 7ffcfeec LR: 0021f34c
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN)   VTCR_EL2: 80003558
[Thu Jan 29 13:29:28 2015](XEN)  VTTBR_EL2: 00010002b9ffc000
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN)  SCTLR_EL2: 30cd187f
[Thu Jan 29 13:29:28 2015](XEN)HCR_EL2: 0038643f
[Thu Jan 29 13:29:28 2015](XEN)  TTBR0_EL2: ff6e8000
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN)ESR_EL2: 
[Thu Jan 29 13:29:28 2015](XEN)  HPFAR_EL2: 
[Thu Jan 29 13:29:28 2015](XEN)  HDFAR: 
[Thu Jan 29 13:29:28 2015](XEN)  HIFAR: 
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN) Xen stack trace from sp=7ffcfeec:
[Thu Jan 29 13:29:28 2015](XEN)0024d068  002f0328 7ffcff2c 
0021f34c   6591e5c1
[Thu Jan 29 13:29:28 2015](XEN) 4000d000 4000d000  
   7ffcff3c
[Thu Jan 29 13:29:28 2015](XEN)002285dc 7fff  7ffcff4c 
00242614   7ffcff54
[Thu Jan 29 13:29:28 2015](XEN)002427c8  00242b6c  
 2800  
[Thu Jan 29 13:29:28 2015](XEN)    
   
[Thu Jan 29 13:29:28 2015](XEN)  27a0 01d3 
   
[Thu Jan 29 13:29:28 2015](XEN)    
   
[Thu Jan 29 13:29:28 2015](XEN)    
   
[Thu Jan 29 13:29:28 2015](XEN)    

[Thu Jan 29 13:29:28 2015](XEN) Xen call trace:
[Thu Jan 29 13:29:28 2015](XEN)[<00229734>] 
_spin_lock_irq+0x18/0x94 (PC)
[Thu Jan 29 13:29:28 2015](XEN)[<0021f34c>] 
csched2_context_saved+0x44/0x18c (LR)
[Thu Jan 29 13:29:28 2015](XEN)[<0021f34c>] 
csched2_context_saved+0x44/0x18c
[Thu Jan 29 13:29:28 2015](XEN)[<002285dc>] context_saved+0x58/0x80
[Thu Jan 29 13:29:28 2015](XEN)[<00242614>] 
schedule_tail+0x148/0x2f0
[Thu Jan 29 13:29:28 2015](XEN)[<002427c8>] 
continue_new_vcpu+0xc/0x70
[Thu Jan 29 13:29:28 2015](XEN)[<00242b6c>] context_switch+0x74/0x7c
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN) 
[Thu Jan 29 13:29:28 2015](XEN) Panic on CPU 0:
[Thu Jan 29 13:29:28 2015](XEN) Assertion 'local_irq_is_enabled()' 
failed at spinlock.c:137
[Thu Jan 29 13:29:28 2015](XEN) 

I haven't yet had a chance to think about whether the ARM context switch
or the scheduler(s) are in the wrong here...
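The assertion itself is easy to model: spin_lock_irq() may only be
entered with interrupts enabled, because the matching unlock
unconditionally re-enables them. A toy sketch (not Xen's actual
implementation) of the invariant the ARM path violates:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the _spin_lock_irq()/_spin_unlock_irq() contract.
 * The lock side asserts IRQs are enabled before masking them;
 * entering with IRQs already disabled (as the ARM context_saved
 * path does) would let the unlock side re-enable interrupts in a
 * region that expected them to stay masked. */
static bool irq_enabled = true;

void toy_spin_lock_irq(void)
{
    assert(irq_enabled);   /* mirrors ASSERT(local_irq_is_enabled()) */
    irq_enabled = false;   /* local_irq_disable() */
}

void toy_spin_unlock_irq(void)
{
    irq_enabled = true;    /* local_irq_enable(), unconditionally */
}

bool toy_irq_enabled(void)
{
    return irq_enabled;
}
```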

Ian.





Re: [Xen-devel] [RFC PATCH V3 05/12] xen: Introduce vm_event

2015-01-31 Thread Tamas K Lengyel
On Fri, Jan 30, 2015 at 6:25 PM, Daniel De Graaf  wrote:
> On 01/29/2015 04:46 PM, Tamas K Lengyel wrote:
>>
>> To make it easier to review the renaming process of mem_event -> vm_event,
>> the process is broken into three pieces, of which this patch is the first.
>> In this patch the vm_event subsystem is introduced and hooked into the
>> build
>> process, but it is not yet used anywhere.
>>
>> Signed-off-by: Tamas K Lengyel 
>
>
> [...]
>>
>> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
>> index f20e89c..d6d403a 100644
>> --- a/xen/include/xsm/dummy.h
>> +++ b/xen/include/xsm/dummy.h
>> @@ -525,6 +525,18 @@ static XSM_INLINE int
>> xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
>>   XSM_ASSERT_ACTION(XSM_DM_PRIV);
>>   return xsm_default_action(action, current->domain, d);
>>   }
>> +
>> +static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain
>> *d, int mode, int op)
>> +{
>> +XSM_ASSERT_ACTION(XSM_PRIV);
>> +return xsm_default_action(action, current->domain, d);
>> +}
>> +
>> +static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d,
>> int op)
>> +{
>> +XSM_ASSERT_ACTION(XSM_DM_PRIV);
>> +return xsm_default_action(action, current->domain, d);
>> +}
>>   #endif
>>
> [...]
>>
>> diff --git a/xen/xsm/flask/policy/access_vectors
>> b/xen/xsm/flask/policy/access_vectors
>> index 1da9f63..a4241b5 100644
>> --- a/xen/xsm/flask/policy/access_vectors
>> +++ b/xen/xsm/flask/policy/access_vectors
>> @@ -250,6 +250,7 @@ class hvm
>>   hvmctl
>>   # XEN_DOMCTL_set_access_required
>>   mem_event
>> +vm_event
>>   # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap}
>> with:
>>   #  source = the domain making the hypercall
>>   #  target = domain whose memory is being shared
>>
>
> This implies that device model domains should be allowed to use the
> operations
> covered by xsm_vm_event_op but not those covered by xsm_vm_event_control.
> If this is how the eventual operations are intended to be used, the FLASK
> permissions also need to be split so that a similar distinction can be made
> in
> the policy.
>
> After looking at the later patches in this series, this appears to be a flaw
> in
> the existing FLASK hooks that got copied over.  While it is still useful to
> fix,
> it  may be better to make the split in a separate patch from the renames.
> Now
> that VM events apply to more than just HVM domains, it may be useful to move
> the new permission(s) from class hvm to either domain2 or mmu.
>
> --
> Daniel De Graaf
> National Security Agency

Moving it to domain2 would make sense to me. The naming is pretty
poor, so I have a hard time understanding why xsm_vm_event_op and
xsm_vm_event_control differ when it comes to device model domains.
The event_op hook corresponds to the memops for access, paging, and
sharing, while event_control corresponds to the domctl that
enables/disables the rings. So yes, I think splitting the names of
these separate things would help clarify what they represent, but I am
not sure whether the restriction on device model domains was intentional.

Tamas
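For reference, the distinction between the two default policies can be
sketched as follows. This is a simplified model under assumed semantics
(XSM_PRIV admits only the privileged domain, XSM_DM_PRIV additionally
admits the target's device-model domain), not the actual Xen code:

```c
#include <stdbool.h>

enum xsm_default { XSM_PRIV, XSM_DM_PRIV };

struct domain {
    int  id;
    int  dm_target_id;   /* id of the domain this one serves as DM, or -1 */
    bool is_privileged;  /* dom0 */
};

/* Simplified default action: XSM_PRIV requires a privileged caller;
 * XSM_DM_PRIV also allows the target's device-model domain. */
bool xsm_default_ok(enum xsm_default def,
                    const struct domain *src, const struct domain *tgt)
{
    if (src->is_privileged)
        return true;
    return def == XSM_DM_PRIV && src->dm_target_id == tgt->id;
}
```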



Re: [Xen-devel] [Qemu-devel] [RFC][PATCH 1/1] libxl: add one machine property to support IGD GFX passthrough

2015-01-31 Thread Wei Liu
On Sat, Jan 31, 2015 at 07:07:16AM +, Xu, Quan wrote:
> 
> 
> > -Original Message-
> > From: xen-devel-boun...@lists.xen.org
> > [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of Wei Liu
> > Sent: Friday, January 30, 2015 8:26 PM
> > To: Chen, Tiejun
> > Cc: Wei Liu; ian.campb...@citrix.com; m...@redhat.com; Ian Jackson;
> > qemu-de...@nongnu.org; xen-devel@lists.xen.org; Gerd Hoffmann
> > Subject: Re: [Xen-devel] [Qemu-devel] [RFC][PATCH 1/1] libxl: add one 
> > machine
> > property to support IGD GFX passthrough
> > 
> > On Fri, Jan 30, 2015 at 08:56:48AM +0800, Chen, Tiejun wrote:
> > [...]
> > > >>>
> > > >>>Just remember to handle old option in libxl if your old option is
> > > >>>already released by some older version of QEMUs.
> > > >>
> > > >>I just drop that old option, -gfx_passthru, if we're under qemu
> > > >>upstream circumstance, like this,
> > > >>
> > > >
> > > >The question is, is there any version of qemu upstream that has been
> > > >released that has the old option (-gfx-passthru)?
> > >
> > > No. Just now we're starting to support IGD passthrough in qemu upstream.
> > >
> > 
> > Right, as of QEMU 2.2.0 there's no support for IGD passthrough in QEMU
> > upstream.
> > 
> 
> Just a question:
>    Now, what features does VT-d support? Thanks.
> 

I don't know whether VT-d is supported in QEMU upstream.

But if there is support in upstream and you want to change some options,
the same principle in my previous email still applies.

Wei.



Re: [Xen-devel] [PATCH 1/5] multiboot2: Fix information request tag size calculation

2015-01-31 Thread Lennart Sorensen
On Fri, Jan 30, 2015 at 01:52:09PM -0700, Ben Hildred wrote:
> Why do you want the size of a pointer instead of the size of the structure?

Isn't *request_tag the dereferenced pointer, and hence sizeof (*request_tag)
the size of the structure, whereas before it was the size of a pointer?
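The distinction is easy to demonstrate outside GRUB; the struct below is
a stand-in, not the real multiboot_header_tag_information_request:

```c
#include <stddef.h>

/* Stand-in for the multiboot information-request tag: three 32-bit
 * header fields followed by a flexible array of requested tag ids.
 * sizeof(p) is the size of the pointer itself; sizeof(*p) is the
 * size of the header, which is what the loop bound needs. */
struct info_request {
    unsigned int type;
    unsigned int flags;
    unsigned int size;
    unsigned int requests[];
};

size_t size_of_pointer(const struct info_request *p) { return sizeof(p); }
size_t size_of_struct(const struct info_request *p)  { return sizeof(*p); }
```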

-- 
Len Sorensen

> On Fri, Jan 30, 2015 at 10:59 AM, Daniel Kiper 
> wrote:
> 
> > Signed-off-by: Daniel Kiper 
> > ---
> >  grub-core/loader/multiboot_mbi2.c |2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/grub-core/loader/multiboot_mbi2.c
> > b/grub-core/loader/multiboot_mbi2.c
> > index 6f74aee..d7c19bc 100644
> > --- a/grub-core/loader/multiboot_mbi2.c
> > +++ b/grub-core/loader/multiboot_mbi2.c
> > @@ -150,7 +150,7 @@ grub_multiboot_load (grub_file_t file, const char
> > *filename)
> > = (struct multiboot_header_tag_information_request *) tag;
> >   if (request_tag->flags & MULTIBOOT_HEADER_TAG_OPTIONAL)
> > break;
> > - for (i = 0; i < (request_tag->size - sizeof (request_tag))
> > + for (i = 0; i < (request_tag->size - sizeof (*request_tag))
> >  / sizeof (request_tag->requests[0]); i++)
> > switch (request_tag->requests[i])
> >   {
> > --
> > 1.7.10.4
> >
> >
> > ___
> > Grub-devel mailing list
> > grub-de...@gnu.org
> > https://lists.gnu.org/mailman/listinfo/grub-devel
> >
> 
> 
> 
> -- 
> --
> Ben Hildred
> Automation Support Services
> 303 815 6721

> ___
> Grub-devel mailing list
> grub-de...@gnu.org
> https://lists.gnu.org/mailman/listinfo/grub-devel




Re: [Xen-devel] [PATCH 1/5] multiboot2: Fix information request tag size calculation

2015-01-31 Thread Ben Hildred
Why do you want the size of a pointer instead of the size of the structure?

On Fri, Jan 30, 2015 at 10:59 AM, Daniel Kiper 
wrote:

> Signed-off-by: Daniel Kiper 
> ---
>  grub-core/loader/multiboot_mbi2.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/grub-core/loader/multiboot_mbi2.c
> b/grub-core/loader/multiboot_mbi2.c
> index 6f74aee..d7c19bc 100644
> --- a/grub-core/loader/multiboot_mbi2.c
> +++ b/grub-core/loader/multiboot_mbi2.c
> @@ -150,7 +150,7 @@ grub_multiboot_load (grub_file_t file, const char
> *filename)
> = (struct multiboot_header_tag_information_request *) tag;
>   if (request_tag->flags & MULTIBOOT_HEADER_TAG_OPTIONAL)
> break;
> - for (i = 0; i < (request_tag->size - sizeof (request_tag))
> + for (i = 0; i < (request_tag->size - sizeof (*request_tag))
>  / sizeof (request_tag->requests[0]); i++)
> switch (request_tag->requests[i])
>   {
> --
> 1.7.10.4
>
>
> ___
> Grub-devel mailing list
> grub-de...@gnu.org
> https://lists.gnu.org/mailman/listinfo/grub-devel
>



-- 
--
Ben Hildred
Automation Support Services
303 815 6721


Re: [Xen-devel] [Qemu-devel] [RFC][PATCH 1/1] libxl: add one machine property to support IGD GFX passthrough

2015-01-31 Thread Xu, Quan


> -Original Message-
> From: Wei Liu [mailto:wei.l...@citrix.com]
> Sent: Saturday, January 31, 2015 10:33 PM
> To: Xu, Quan
> Cc: Wei Liu; Chen, Tiejun; ian.campb...@citrix.com; m...@redhat.com; Ian 
> Jackson;
> qemu-de...@nongnu.org; xen-devel@lists.xen.org; Gerd Hoffmann
> Subject: Re: [Xen-devel] [Qemu-devel] [RFC][PATCH 1/1] libxl: add one machine
> property to support IGD GFX passthrough
> 
> On Sat, Jan 31, 2015 at 07:07:16AM +, Xu, Quan wrote:
> >
> >
> > > -Original Message-
> > > From: xen-devel-boun...@lists.xen.org
> > > [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of Wei Liu
> > > Sent: Friday, January 30, 2015 8:26 PM
> > > To: Chen, Tiejun
> > > Cc: Wei Liu; ian.campb...@citrix.com; m...@redhat.com; Ian Jackson;
> > > qemu-de...@nongnu.org; xen-devel@lists.xen.org; Gerd Hoffmann
> > > Subject: Re: [Xen-devel] [Qemu-devel] [RFC][PATCH 1/1] libxl: add
> > > one machine property to support IGD GFX passthrough
> > >
> > > On Fri, Jan 30, 2015 at 08:56:48AM +0800, Chen, Tiejun wrote:
> > > [...]
> > > > >>>
> > > > >>>Just remember to handle old option in libxl if your old option
> > > > >>>is already released by some older version of QEMUs.
> > > > >>
> > > > >>I just drop that old option, -gfx_passthru, if we're under qemu
> > > > >>upstream circumstance, like this,
> > > > >>
> > > > >
> > > > >The question is, is there any version of qemu upstream that has
> > > > >been released that has the old option (-gfx-passthru)?
> > > >
> > > > No. Just now we're starting to support IGD passthrough in qemu upstream.
> > > >
> > >
> > > Right, as of QEMU 2.2.0 there's no support for IGD passthrough in
> > > QEMU upstream.
> > >
> >
> > Just a question:
> >    Now, what features does VT-d support? Thanks.
> >
> 
> I don't know whether vt-d is supported in qemu upstream.
> 
> But if there is support in upstream and you want to change some options, the
> same principle in my previous email still applies.
> 
> Wei.

Thanks.  -Quan



[Xen-devel] [ovmf test] 34018: regressions - FAIL

2015-01-31 Thread xen . org
flight 34018 ovmf real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/34018/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   5 xen-build fail REGR. vs. 33686
 build-i3865 xen-build fail REGR. vs. 33686

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-sedf  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pcipt-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-winxpsp3   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)   blocked n/a

version targeted for testing:
 ovmf 7cc7022dfccadcae9e815d071916f96577e5df89
baseline version:
 ovmf 447d264115c476142f884af0be287622cd244423


People who touched revisions under test:
  "Gao, Liming" 
  "Long, Qin" 
  "Yao, Jiewen" 
  Aaron Pop 
  Abner Chang 
  Alex Williamson 
  Anderw Fish 
  Andrew Fish 
  Anthony PERARD 
  Ard Biesheuvel 
  Ari Zigler 
  Brendan Jackman 
  Bruce Cran 
  Cecil Sheng 
  Chao Zhang 
  Chao, Zhang 
  Chen Fan 
  Chris Phillips 
  Chris Ruffin 
  Cinnamon Shia 
  Daryl McDaniel  
  Daryl McDaniel 
  daryl.mcdaniel 
  daryl.mcdan...@intel.com
  darylm503 
  David Wei 
  David Woodhouse 
  Deric Cole 
  Dong Eric 
  Dong Guo 
  Dong, Guo 
  Elvin Li 
  Eric Dong 
  Eugene Cohen 
  Feng Tian 
  Feng, Bob C 
  Fu Siyuan 
  Fu, Siyuan 
  Gabriel Somlo 
  Gao, Liming 
  Gao, Liming liming.gao 
  Gao, Liming liming@intel.com
  Garrett Kirkendall 
  Gary Lin 
  Grzegorz Milos 
  Hao Wu 
  Harry Liebel 
  Hess Chen 
  Hot Tian 
  isakov-sl 
  isakov...@bk.ru
  Jaben Carsey 
  jcarsey 
  jcarsey 
  Jeff Bobzin (jeff.bobzin 
  Jeff Bobzin (

[Xen-devel] [ovmf test] 34039: regressions - FAIL

2015-01-31 Thread xen . org
flight 34039 ovmf real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/34039/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   5 xen-build fail REGR. vs. 33686
 build-i3865 xen-build fail REGR. vs. 33686

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-sedf  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pcipt-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-winxpsp3   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1) blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)   blocked n/a

version targeted for testing:
 ovmf 7cc7022dfccadcae9e815d071916f96577e5df89
baseline version:
 ovmf 447d264115c476142f884af0be287622cd244423


People who touched revisions under test:
  "Gao, Liming" 
  "Long, Qin" 
  "Yao, Jiewen" 
  Aaron Pop 
  Abner Chang 
  Alex Williamson 
  Anderw Fish 
  Andrew Fish 
  Anthony PERARD 
  Ard Biesheuvel 
  Ari Zigler 
  Brendan Jackman 
  Bruce Cran 
  Cecil Sheng 
  Chao Zhang 
  Chao, Zhang 
  Chen Fan 
  Chris Phillips 
  Chris Ruffin 
  Cinnamon Shia 
  Daryl McDaniel  
  Daryl McDaniel 
  daryl.mcdaniel 
  daryl.mcdan...@intel.com
  darylm503 
  David Wei 
  David Woodhouse 
  Deric Cole 
  Dong Eric 
  Dong Guo 
  Dong, Guo 
  Elvin Li 
  Eric Dong 
  Eugene Cohen 
  Feng Tian 
  Feng, Bob C 
  Fu Siyuan 
  Fu, Siyuan 
  Gabriel Somlo 
  Gao, Liming 
  Gao, Liming liming.gao 
  Gao, Liming liming@intel.com
  Garrett Kirkendall 
  Gary Lin 
  Grzegorz Milos 
  Hao Wu 
  Harry Liebel 
  Hess Chen 
  Hot Tian 
  isakov-sl 
  isakov...@bk.ru
  Jaben Carsey 
  jcarsey 
  jcarsey 
  Jeff Bobzin (jeff.bobzin 
  Jeff Bobzin (

[Xen-devel] [PATCH OSSTEST] ts-xen-build-prep: install nasm

2015-01-31 Thread Wei Liu
OVMF requires nasm to build.

Signed-off-by: Wei Liu 
Cc: Ian Campbell 
Cc: Ian Jackson 
---
 ts-xen-build-prep | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index a7d0d03..24a8b25 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -178,7 +178,7 @@ sub prep () {
autoconf automake libtool xsltproc
libxml2-utils libxml2-dev libnl-dev
libdevmapper-dev w3c-dtd-xhtml libxml-xpath-perl
-  ccache));
+   ccache nasm));
 
 target_cmd_root($ho, "chmod -R a+r /usr/share/git-core/templates");
 # workaround for Debian #595728
-- 
1.9.1




[Xen-devel] [qemu-upstream-unstable test] 34011: regressions - FAIL

2015-01-31 Thread xen . org
flight 34011 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/34011/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 11 guest-localmigrate  fail REGR. vs. 33488
 test-amd64-i386-freebsd10-amd64 11 guest-localmigrate fail REGR. vs. 33488
 test-amd64-i386-xl-win7-amd64 10 guest-localmigrate   fail REGR. vs. 33488
 test-amd64-amd64-xl-winxpsp3 10 guest-localmigratefail REGR. vs. 33488
 test-amd64-amd64-xl-win7-amd64 10 guest-localmigrate  fail REGR. vs. 33488
 test-amd64-i386-xl-winxpsp3-vcpus1 10 guest-localmigrate  fail REGR. vs. 33488
 test-amd64-i386-xl-winxpsp3  10 guest-localmigratefail REGR. vs. 33488

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-i386-xl-qemuu-win7-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-i386-xl-qemuu-winxpsp3 10 guest-localmigrate   fail REGR. vs. 33488
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-amd64-xl-qemuu-win7-amd64 10 guest-localmigrate fail REGR. vs. 33488
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-localmigrate  fail REGR. vs. 33488

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   9 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel  9 guest-start  fail  never pass
 test-armhf-armhf-xl-sedf 10 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-sedf-pin 10 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 10 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-midway   10 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  10 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-amd   9 guest-start  fail   never pass
 test-armhf-armhf-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start fail never pass
 test-armhf-armhf-xl-credit2   5 xen-boot fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3 14 guest-stopfail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop fail never pass

version targeted for testing:
 qemuu be11dc1e9172f91e798a8f831b30c14b479e08e8
baseline version:
 qemuu 0d37748342e29854db7c9f6c47d7f58c6cfba6b2


People who touched revisions under test:
  Don Slutz 
  Paul Durrant 
  Stefano Stabellini 


jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-rhel6hvm-amd pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64fail
 test-amd64-i386-xl-qemuu-debianhvm-amd64 fail
 test-amd64-i386-freebsd10-amd64  fail
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  

[Xen-devel] [qemu-mainline bisection] complete test-amd64-i386-rhel6hvm-intel

2015-01-31 Thread xen . org
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test redhat-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  b3a4755a67a52aa7297eb8927b482d09dabdefec
  Bug not present: a805ca54015bd123e2bc2454ec59619d0ed106c2


  commit b3a4755a67a52aa7297eb8927b482d09dabdefec
  Merge: a805ca5 4478aa7
  Author: Peter Maydell 
  Date:   Thu Jan 22 12:14:19 2015 +
  
  Merge remote-tracking branch 'remotes/kraxel/tags/pull-vnc-20150122-1' 
into staging
  
  vnc: add support for multiple vnc displays
  
  # gpg: Signature made Thu 22 Jan 2015 11:00:54 GMT using RSA key ID 
D3E87138
  # gpg: Good signature from "Gerd Hoffmann (work) "
  # gpg: aka "Gerd Hoffmann "
  # gpg: aka "Gerd Hoffmann (private) "
  
  * remotes/kraxel/tags/pull-vnc-20150122-1:
monitor: add vnc websockets
monitor: add query-vnc-servers command
vnc: factor out qmp_query_client_list
vnc: track & limit connections
vnc: update docs/multiseat.txt
vnc: allow binding servers to qemu consoles
vnc: switch to QemuOpts, allow multiple servers
vnc: add display id to acl names
vnc: remove unused DisplayState parameter, add id instead.
vnc: remove vnc_display global
  
  Signed-off-by: Peter Maydell 
  
  commit 4478aa768ccefcc5b234c23d035435fd71b932f6
  Author: Gerd Hoffmann 
  Date:   Wed Dec 10 09:49:39 2014 +0100
  
  monitor: add vnc websockets
  
  Add websockets bool to VncBasicInfo, report websocket server sockets,
  flag websocket client connections.
  
  Signed-off-by: Gerd Hoffmann 
  
  commit df887684603a4b3b0c623090a6b419dc70f22c32
  Author: Gerd Hoffmann 
  Date:   Wed Dec 17 15:49:44 2014 +0100
  
  monitor: add query-vnc-servers command
  
  Add new query vnc qmp command, for the lack of better ideas just name it
  "query-vnc-servers".  Changes over query-vnc:
  
   * It returns a list of vnc servers, so multiple vnc server instances
 are covered.
   * Each vnc server returns a list of server sockets.  Followup patch
 will use that to also report websockets.  In case we add support for
 multiple server sockets (to better support ipv4+ipv6
 dualstack) we can add them to the list too.
  
  Signed-off-by: Gerd Hoffmann 
  
  commit 2d29a4368c3c00a5cf200f29b3dfd32bc4fb2c31
  Author: Gerd Hoffmann 
  Date:   Tue Dec 9 15:27:39 2014 +0100
  
  vnc: factor out qmp_query_client_list
  
  so we can reuse it for the new vnc query command.
  
  Signed-off-by: Gerd Hoffmann 
  
  commit e5f34cdd2da54f28d90889a3afd15fad2d6105ff
  Author: Gerd Hoffmann 
  Date:   Thu Oct 2 12:09:34 2014 +0200
  
  vnc: track & limit connections
  
  Also track the number of connections in "connecting" and "shared" state
  (in addition to the "exclusive" state).  Apply a configurable limit to
  these connections.
  
  The logic to apply the limit to connections in "shared" state is pretty
  simple:  When the limit is reached no new connections are allowed.
  
  The logic to apply the limit to connections in "connecting" state (this
  is the state you are in *before* successful authentication) is
  slightly different:  A new connect kicks out the oldest client which is
   still in "connecting" state.  This avoids an easy DoS by unauthenticated
  users by simply opening connections until the limit is reached.
  
  Cc: Dr. David Alan Gilbert 
  Signed-off-by: Gerd Hoffmann 
  
  commit 86fdcf23f4a9d8473844734907555b3a93ed686c
  Author: Gerd Hoffmann 
  Date:   Thu Oct 2 15:53:37 2014 +0200
  
  vnc: update docs/multiseat.txt
  
  vnc joins the party ;)
  Also some s/head/seat/ to clarify.
  
  Signed-off-by: Gerd Hoffmann 
  
  commit 1d0d59fe291967533f974e82213656d479475a1e
  Author: Gerd Hoffmann 
  Date:   Thu Sep 18 12:54:49 2014 +0200
  
  vnc: allow binding servers to qemu consoles
  
   This patch adds a display= parameter to the vnc options.  This allows
   binding a vnc server instance to a specific display, making it possible
   to create a multiseat setup with a vnc server for each seat.
  
  Signed-off-by: Gerd Hoffmann 
  
  commit 4db14629c38611061fc19ec6927405923de84f08
  Author: Gerd Hoffmann 
  Date:   Tue Sep 16 12:33:03 2014 +0200
  
  vnc: switch to QemuOpts, allow multiple servers
  
   This patch switches vnc over to QemuOpts, and it (more or less
   as a side effect) allows multiple vnc server instances.

Re: [Xen-devel] [PATCH 1/2] sched: credit2: respect per-vcpu hard affinity

2015-01-31 Thread Justin Weaver
On Mon, Jan 19, 2015 at 9:21 PM, Justin Weaver  wrote:
> On Mon, Jan 12, 2015 at 8:05 AM, Dario Faggioli
>  wrote:

>>>  if ( __vcpu_on_runq(svc) )
>>> +on_runq = 1;
>>> +
>>> +/* If the runqs are different, move svc to trqd. */
>>> +if ( svc->rqd != trqd )
>>>  {
>>> -__runq_remove(svc);
>>> -update_load(ops, svc->rqd, svc, -1, now);
>>> -on_runq=1;
>>> +if ( on_runq )
>>> +{
>>> +__runq_remove(svc);
>>> +update_load(ops, svc->rqd, svc, -1, now);
>>> +}
>>> +__runq_deassign(svc);
>>> +__runq_assign(svc, trqd);
>>> +if ( on_runq )
>>> +{
>>> +update_load(ops, svc->rqd, svc, 1, now);
>>> +runq_insert(ops, svc->vcpu->processor, svc);
>>> +}
>>>  }
>>> -__runq_deassign(svc);
>>> -svc->vcpu->processor = cpumask_any(&trqd->active);
>>> -__runq_assign(svc, trqd);
>>> +
>>>
>> Mmm.. I do not like the way the code looks after this is applied. Before
>> the patch, it was really straightforward and easy to understand. Now
>> it's way more involved. Can you explain why this rework is necessary?
>> For now do it here, then we'll see whether and how to put that into a
>> doc comment.
>
> When I was testing, if I changed a vcpu's hard affinity from its
> current pcpu to another pcpu in the same run queue, the VM would stop
> executing. I'll go back and look at this because I see what you wrote
> below about wake being called by vcpu_migrate in schedule.c; the vcpu
> shouldn't freeze on the old cpu, it should wake on the new cpu
> whether or not the run queue changed. I'll test this again.

>>> @@ -1399,8 +1531,12 @@ csched2_vcpu_migrate(
>>>
>>>  trqd = RQD(ops, new_cpu);
>>>
>>> -if ( trqd != svc->rqd )
>>> -migrate(ops, svc, trqd, NOW());
>>> +/*
>>> + * Call migrate even if svc->rqd == trqd; there may have been an
>>> + * affinity change that requires a call to runq_tickle for a new
>>> + * processor within the same run queue.
>>> + */
>>> +migrate(ops, svc, trqd, NOW());
>>>  }
>>>
>> As said above, I don't think I see the reason for this. Affinity
>> changes, e.g., due to calls to vcpu_set_affinity() in schedule.c, forces
>> the vcpu through a sleep wakeup cycle (it calls vcpu_sleep_nosync()
>> directly, while vcpu_wake() is called inside vcpu_migrate()).
>>
>> So, looks like what you are after (i.e., runq_tickle being called)
>> should happen already, isn't it? Are there other reasons you need it
>> for?
>
> Like I said above, I will look at this again. My VMs were getting
> stuck after certain hard affinity changes. I'll roll back some of
> these changes and test it out again.

I discovered that SCHED_OP(VCPU2OP(v), wake, v); in function vcpu_wake
in schedule.c is not being called because v's pause_flags has
_VPF_blocked set.

For example:
- I start a guest with one vcpu with hard affinity 8-15; xl vcpu-list
  says it's running on pcpu 15.
- I run "xl vcpu-pin 1 0 8" to restrict its hard affinity to pcpu 8 only.
- When execution reaches vcpu_wake, vcpu_runnable(v) is false because
  _VPF_blocked is set, so the call to SCHED_OP(VCPU2OP(v), wake, v); is
  skipped and the vcpu never gets a runq_tickle.
- xl vcpu-list now shows --- for the state and I cannot console into
  the guest.
- What I don't understand, though, is that if I then run
  "xl vcpu-pin 1 0 15", _VPF_blocked is NOT set, vcpu_wake calls
  credit2's wake, the vcpu gets a runq_tickle, and everything is fine
  again.

Why did the value of the _VPF_blocked flag change after I ran xl
vcpu-pin the second time? I dove deep into the code and could not
figure it out.

So that is why v1 of my patch worked: I let migrate run during an
affinity change even if the current and destination run queues were
the same, so it would do the processor assignment and runq_tickle
regardless. I think you'll have to tell me whether that's a hack or a
good solution!

I greatly appreciate any feedback.

Thank you,
Justin
