[Xen-devel] [ovmf test] 85429: regressions - FAIL

2016-03-06 Thread osstest service owner
flight 85429 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85429/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 65543
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 65543

version targeted for testing:
 ovmf 2c328aca1d5dfd99e7541d85db63318be3d4da62
baseline version:
 ovmf 5ac96e3a28dd26eabee421919f67fa7c443a47f1

 Last test of basis    65543  2015-12-08 08:45:15 Z   88 days
 Failing since         65593  2015-12-08 23:44:51 Z   88 days   92 attempts
 Testing same since    85429  2016-03-05 07:47:36 Z    1 days    1 attempts


People who touched revisions under test:
  "Samer El-Haj-Mahmoud" 
  "Yao, Jiewen" 
  Alcantara, Paulo 
  Anbazhagan Baraneedharan 
  Andrew Fish 
  Ard Biesheuvel 
  Arthur Crippa Burigo 
  Cecil Sheng 
  Chao Zhang 
  Charles Duffy 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Daocheng Bu 
  Daryl McDaniel 
  edk2 dev 
  edk2-devel 
  Eric Dong 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Feng Tian 
  Fu Siyuan 
  Hao Wu 
  Haojian Zhuang 
  Hess Chen 
  Heyi Guo 
  Jaben Carsey 
  Jeff Fan 
  Jiaxin Wu 
  jiewen yao 
  Jim Dailey 
  jim_dai...@dell.com 
  Jordan Justen 
  Karyne Mayer 
  Larry Hauch 
  Laszlo Ersek 
  Leahy, Leroy P 
  Lee Leahy 
  Leekha Shaveta 
  Leif Lindholm 
  Liming Gao 
  Mark Rutland 
  Marvin Haeuser 
  Michael Kinney 
  Michael LeMay 
  Michael Thomas 
  Ni, Ruiyu 
  Paolo Bonzini 
  Paulo Alcantara 
  Paulo Alcantara Cavalcanti 
  Qin Long 
  Qiu Shumin 
  Rodrigo Dias Correa 
  Ruiyu Ni 
  Ryan Harkin 
  Samer El-Haj-Mahmoud 
  Samer El-Haj-Mahmoud 
  Star Zeng 
  Supreeth Venkatesh 
  Tapan Shah 
  Tian, Feng 
  Vladislav Vovchenko 
  Yao Jiewen 
  Yao, Jiewen 
  Ye Ting 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 
  Zhangfei Gao 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11660 lines long.)



[Xen-devel] [xen-unstable-coverity test] 85551: all pass - PUSHED

2016-03-06 Thread osstest service owner
flight 85551 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85551/

Perfect :-)
All tests in this flight passed
version targeted for testing:
 xen  1bd52e1fd66c47af690124d74d11ccb271c96f6b
baseline version:
 xen  abf8824fe530bcf060c757596f68663c87546a6a

 Last test of basis    84355  2016-02-28 09:19:08 Z    7 days
 Failing since         85044  2016-03-02 10:04:47 Z    4 days    2 attempts
 Testing same since    85551  2016-03-06 09:19:32 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Aravind Gopalakrishnan 
  Bob Moore 
  Boris Ostrovsky 
  Boris Ostrovsky  for SVM bits
  Corneliu ZUZU 
  Dario Faggioli 
  David Vrabel 
  Doug Goldstein 
  Feng Wu 
  George Dunlap 
  George Dunlap 
  Hanjun Guo 
  Haozhong Zhang 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Juergen Gross 
  Kevin Tian 
  Liang Li 
  Liang Z Li 
  Naresh Bhat 
  Parth Dixit 
  Paul Durrant 
  Razvan Cojocaru 
  Shannon Zhao 
  Shannon Zhao 
  Stefano Stabellini 
  Tamas K Lengyel 
  Tim Deegan 
  Tomasz Nowicki 
  Wei Liu 
  Wen Congyang 
  Yang Hongyang 
  Yang Hongyang 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-coverity
+ revision=1bd52e1fd66c47af690124d74d11ccb271c96f6b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-coverity 1bd52e1fd66c47af690124d74d11ccb271c96f6b
+ branch=xen-unstable-coverity
+ revision=1bd52e1fd66c47af690124d74d11ccb271c96f6b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-coverity
+ qemuubranch=qemu-upstream-unstable-coverity
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-coverity
+ prevxenbranch=xen-unstable
+ '[' x1bd52e1fd66c47af690124d74d11ccb271c96f6b = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src '[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
++++ getconfig GitCacheProxy
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xg

[Xen-devel] Behaviour when setting CPU_BASED_MONITOR_TRAP_FLAG in hvm_do_resume()

2016-03-06 Thread Razvan Cojocaru
Hello,

Assuming I set v->arch.hvm_vmx.exec_control |=
CPU_BASED_MONITOR_TRAP_FLAG; in hvm_do_resume(), would that cause a
VMEXIT with EXIT_REASON_MONITOR_TRAP_FLAG _before_ the instruction at the
current rIP runs, or _after_ it?

A few tests I've run suggest that the VMEXIT occurs _before_, i.e. the
instruction does not run between setting the flag and the VMEXIT, but
the actual code is a bit more involved and I might have just come across
a corner case, so I thought it would be best to have official
confirmation on the list.
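
For reference, a minimal sketch of the sequence I mean (simplified, and
assuming vmx_update_cpu_exec_control() is the helper that pushes
exec_control into the VMCS, as elsewhere in the VMX code):

    /* In hvm_do_resume(), before returning to the guest: */
    v->arch.hvm_vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
    vmx_update_cpu_exec_control(v);

    /* On the next VM entry an EXIT_REASON_MONITOR_TRAP_FLAG exit is
     * expected; the question is whether it is raised before or after
     * the instruction at the current rIP executes. */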


Thanks,
Razvan



[Xen-devel] [linux-4.1 test] 85470: regressions - FAIL

2016-03-06 Thread osstest service owner
flight 85470 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85470/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 66399
 build-i386-rumpuserxen6 xen-build fail REGR. vs. 66399
 test-armhf-armhf-xl-xsm  15 guest-start/debian.repeat fail REGR. vs. 66399
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 66399
 test-armhf-armhf-xl  15 guest-start/debian.repeat fail REGR. vs. 66399
 test-armhf-armhf-xl-multivcpu 16 guest-start.2   fail in 82991 REGR. vs. 66399
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail in 85331 REGR. vs. 66399

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl  17 guest-localmigrate/x10 fail in 84906 pass in 85470
 test-armhf-armhf-libvirt-xsm  6 xen-boot   fail in 85331 pass in 85470
 test-armhf-armhf-xl-xsm   9 debian-install fail in 85331 pass in 85470
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 15 guest-localmigrate/x10 fail in 85331 pass in 85470
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeat  fail pass in 82991
 test-armhf-armhf-xl-rtds 11 guest-start fail pass in 84906
 test-armhf-armhf-xl-cubietruck 11 guest-start   fail pass in 85331

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail in 84906 like 66399
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 66399
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 66399
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 66399
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 66399
 test-armhf-armhf-xl-vhd   9 debian-di-installfail   like 66399

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 84906 never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail in 84906 never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail in 85331 never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail in 85331 never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestorefail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-installfail   never pass

version targeted for testing:
 linux    83fdace666f72dbfc4a7681a04e3689b61dae3b9
baseline version:
 linux    07cc49f66973f49a391c91bf4b158fa0f2562ca8

 Last test of basis    66399  2015-12-15 18:20:39 Z   81 days
 Failing since         78925  2016-01-24 13:50:39 Z   41 days   42 attempts
 Testing same since

[Xen-devel] [xen-4.3-testing test] 85479: regressions - trouble: blocked/broken/fail/pass

2016-03-06 Thread osstest service owner
flight 85479 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85479/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 3 host-install(3) broken REGR. vs. 83004
 build-armhf   3 host-install(3) broken REGR. vs. 83004
 test-amd64-amd64-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 83004
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 83004
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 83004
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 83004

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail in 85336 pass in 85479
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-localmigrate fail pass in 85336

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 83004
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 83004
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 83004

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 20 leak-check/checkfail never pass

version targeted for testing:
 xen  404e83e055cb419efccbcb0c5c89476307a9ae46
baseline version:
 xen  ccc7adf9cff5d5f93720afcc1d0f7227d50feab2

 Last test of basis    83004  2016-02-18 14:47:44 Z   16 days
 Testing same since    84923  2016-03-01 13:41:07 Z    5 days    5 attempts


People who touched revisions under test:
  Ian Campbell 
  Ian Jackson 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  broken  
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  blocked 
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopsbroken  
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  blocked 
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64fail
 test-amd64-i386-xl-qemut-debianhvm-amd64 fail
 test-amd64-amd64-xl-qemuu-debianhvm-amd64fail
 test-amd64-i386-xl-qemuu-debianhvm-amd64 fail
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  

[Xen-devel] [xen-unstable baseline-only test] 44225: tolerable FAIL

2016-03-06 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 44225 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/44225/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-i386-rumpuserxen6 xen-buildfail   like 44216
 build-amd64-rumpuserxen   6 xen-buildfail   like 44216
 test-amd64-amd64-xl-credit2  19 guest-start/debian.repeatfail   like 44216
 test-amd64-amd64-xl-xsm  19 guest-start/debian.repeatfail   like 44216
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 44216
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 44216

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 xen  1bd52e1fd66c47af690124d74d11ccb271c96f6b
baseline version:
 xen  3f19ca9ad0b66c57c91921dc8a695634eee0c679

 Last test of basis    44216  2016-03-03 20:57:55 Z    2 days
 Testing same since    44225  2016-03-06 05:52:35 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Aravind Gopalakrishnan 
  Boris Ostrovsky 
  Dario Faggioli 
  Hanjun Guo 
  Jan Beulich 
  Juergen Gross 
  Naresh Bhat 
  Parth Dixit 
  Paul Durrant 
  Shannon Zhao 
  Shannon Zhao 
  Stefano Stabellini 
  Tim Deegan 
  Tomasz Nowicki 
  Yang Hongyang 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt

[Xen-devel] [linux-mingo-tip-master test] 85494: regressions - FAIL

2016-03-06 Thread osstest service owner
flight 85494 linux-mingo-tip-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85494/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen6 xen-build fail REGR. vs. 60684
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 60684
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate   fail REGR. vs. 60684
 test-amd64-amd64-libvirt 15 guest-saverestore.2   fail REGR. vs. 60684
 test-amd64-amd64-xl-xsm  15 guest-localmigratefail REGR. vs. 60684
 test-amd64-amd64-xl  15 guest-localmigratefail REGR. vs. 60684
 test-amd64-amd64-libvirt-xsm 15 guest-saverestore.2   fail REGR. vs. 60684
 test-amd64-amd64-pair  22 guest-migrate/dst_host/src_host fail REGR. vs. 60684
 test-amd64-amd64-xl-credit2  15 guest-localmigratefail REGR. vs. 60684

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 17 guest-localmigrate/x10fail REGR. vs. 60684
 test-amd64-i386-libvirt-xsm  15 guest-saverestore.2  fail blocked in 60684
 test-amd64-i386-xl   15 guest-localmigrate   fail blocked in 60684
 test-amd64-i386-libvirt  15 guest-saverestore.2  fail blocked in 60684
 test-amd64-i386-xl-xsm   15 guest-localmigrate   fail blocked in 60684
 test-amd64-amd64-libvirt-pair 22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop   fail blocked in 60684
 test-amd64-i386-pair  22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-i386-libvirt-pair 22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 60684
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 60684

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestorefail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1   fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass

version targeted for testing:
 linux    f71c92a4f1a703fad3366fb51347bd8700a172b7
baseline version:
 linux    69f75ebe3b1d1e636c4ce0a0ee248edacc69cbe0

 Last test of basis    60684  2015-08-13 04:21:46 Z  206 days
 Failing since         60712  2015-08-15 18:33:48 Z  203 days  148 attempts
 Testing same since    85494  2016-03-05 20:24:40 Z    0 days    1 attempts

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  fail
 test-amd64-i386-xl   fail
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  fail
 test-amd64-amd64-xl-xsm   

[Xen-devel] [PATCH v6 for Xen 4.7 0/4] Enable per-VCPU parameter settings for RTDS scheduler

2016-03-06 Thread Chong Li
[Goal]
The current xl sched-rtds tool can only set all VCPUs of a domain
to the same parameters, although the scheduler supports VCPUs with
different parameters. This patchset enables the xl sched-rtds
tool to configure the VCPUs of a domain with different parameters.

These per-VCPU settings can be used in many scenarios. For example,
based on Dario's statement in our previous discussion
(http://lists.xen.org/archives/html/xen-devel/2014-09/msg00423.html),
if there are two real-time applications, with different timing
requirements, running in a multi-VCPU guest domain, it is beneficial
to pin these two applications to two separate VCPUs with different
scheduling parameters.

What this patchset includes is a wanted and planned feature for the RTDS
scheduler (http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler) in
Xen 4.7. The interface design of the xl sched-rtds tool is based on
Meng's previous discussion with Dario, George and Wei
(http://lists.xen.org/archives/html/xen-devel/2015-02/msg02606.html).
Basically, there are three main changes:

1) in xl, we create an array that records all VCPUs whose parameters
are about to be modified or output.

2) in libxl, we receive the array and call different xc functions to 
handle it.

3) in xen and libxc, we use
XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo (introduced by this
patchset) as the hypercalls for per-VCPU operations (get/set methods).


[Usage]
With this patchset in use, the xl sched-rtds tool can:

1) show the budget and period of each VCPU of each domain,
by using the "xl sched-rtds -v all" command. An example:

# xl sched-rtds -v all
Cpupool Pool-0: sched=RTDS
Name                                ID VCPU    Period    Budget
Domain-0                             0    0     10000      4000
vm1                                  1    0       300       150
vm1                                  1    1       400       200
vm1                                  1    2     10000      4000
vm1                                  1    3      1000       500
vm2                                  2    0     10000      4000
vm2                                  2    1     10000      4000

Using "xl sched-rtds" will output the default scheduling parameters
for each domain. An example would be like:

# xl sched-rtds
Cpupool Pool-0: sched=RTDS
Name                                ID    Period    Budget
Domain-0                             0     10000      4000
vm1                                  1     10000      4000
vm2                                  2     10000      4000


2) show the budget and period of each VCPU of a specific domain,
by using, e.g., the "xl sched-rtds -d vm1 -v all" command. The output
would be:

# xl sched-rtds -d vm1 -v all
Name                                ID VCPU    Period    Budget
vm1                                  1    0       300       150
vm1                                  1    1       400       200
vm1                                  1    2     10000      4000
vm1                                  1    3      1000       500

To show a subset of the parameters of the VCPUs of a specific domain,
use, e.g., the "xl sched-rtds -d vm1 -v 0 -v 3" command.
The output would be:

# xl sched-rtds -d vm1 -v 0 -v 3
Name                                ID VCPU    Period    Budget
vm1                                  1    0       300       150
vm1                                  1    3      1000       500

Using, e.g., "xl sched-rtds -d vm1" outputs the default
scheduling parameters of vm1. An example:

# xl sched-rtds -d vm1
Name                                ID    Period    Budget
vm1                                  1     10000      4000


3) Users can set the budget and period of multiple VCPUs of a 
specific domain with only one command, 
e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -v 3 -p 300 -b 150".

Users can set all VCPUs with the same parameters in a single command,
e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250" (see the sketch below).


---
Previous conclusion:
On PATCH v4, our concerns were about the usage of hypercall_preemption_check
and the printing of a warning message (both in xen). These issues are
addressed in this version.


CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
CC: 



Chong Li (4):
  xen: enable per-VCPU parameter settings for RTDS scheduler
  libxc: enable per-VCPU parameter settings for RTDS scheduler
  libxl: enable per-VCPU parameter settings for RTDS scheduler
  xl: enable per-VCPU parameter settings for RTDS scheduler

 docs/man/xl.pod.1 |   4 +
 tools/libxc/include/xenctrl.h |  16 ++-
 tools/libxc/xc_rt.c   |  68 +
 tools/libxl/libxl.c   | 326 +++---
 tools/libxl/libxl.h   |  37 +
 tools/libxl/libxl_types.idl   |  14 ++
 tools/libxl/xl_cmdimpl.c  | 292 -
 tools/libxl/xl_cmdtable.c |  10 +-
 xen/common/sched_credit.c |   4 +
 xen/common/sched_credit2.c|   4 +
 xen/common/sched_rt

[Xen-devel] [PATCH v6 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler

2016-03-06 Thread Chong Li
Add libxl_vcpu_sched_params_get/set and sched_rtds_vcpu_get/set
functions to support per-VCPU settings.

Signed-off-by: Chong Li 
Signed-off-by: Meng Xu 
Signed-off-by: Sisu Xi 

---
Changes on PATCH v5:
1) Add a separate function, sched_rtds_vcpus_params_set_all(), to set
the parameters of all vcpus of a domain.

2) Add libxl_vcpu_sched_params_set_all() to invoke the above function.

3) Coding style changes. (I didn't find the indentation rules for function
calls with long parameters (still 4 spaces?), so I just imitated the
indentation style of some existing functions)

Changes on PATCH v4:
1) Coding style changes

Changes on PATCH v3:
1) Add sanity check on vcpuid

2) Add comments on per-domain and per-vcpu functions for libxl
users

Changes on PATCH v2:
1) New data structure (libxl_vcpu_sched_params and libxl_sched_params)
to help per-VCPU settings.

2) sched_rtds_vcpu_get can now return a random subset of the parameters
of the VCPUs of a specific domain.

CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
---
 tools/libxl/libxl.c | 326 
 tools/libxl/libxl.h |  37 +
 tools/libxl/libxl_types.idl |  14 ++
 3 files changed, 354 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index bd3aac8..4532e86 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5770,6 +5770,207 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     return 0;
 }
 
+static int sched_rtds_validate_params(libxl__gc *gc, int period,
+                                      int budget, uint32_t *sdom_period,
+                                      uint32_t *sdom_budget)
+{
+    int rc = 0;
+    if (period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
+        if (period < 1) {
+            LOG(ERROR, "VCPU period is out of range, "
+                       "valid values are larger than or equal to 1");
+            rc = ERROR_INVAL; /* error scheduling parameter */
+            goto out;
+        }
+        *sdom_period = period;
+    }
+
+    if (budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
+        if (budget < 1) {
+            LOG(ERROR, "VCPU budget is not set or out of range, "
+                       "valid values are larger than or equal to 1");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+        *sdom_budget = budget;
+    }
+
+    if (*sdom_budget > *sdom_period) {
+        LOG(ERROR, "VCPU budget must be smaller than "
+                   "or equal to VCPU period");
+        rc = ERROR_INVAL;
+    }
+out:
+    return rc;
+}
+
+/* Get the RTDS scheduling parameters of vcpu(s) */
+static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
+                               libxl_vcpu_sched_params *scinfo)
+{
+    uint32_t num_vcpus;
+    int i, r, rc;
+    xc_dominfo_t info;
+    struct xen_domctl_schedparam_vcpu *vcpus;
+
+    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (r < 0) {
+        LOGE(ERROR, "getting domain info");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    num_vcpus = scinfo->num_vcpus ? scinfo->num_vcpus :
+                info.max_vcpu_id + 1;
+
+    GCNEW_ARRAY(vcpus, num_vcpus);
+
+    if (scinfo->num_vcpus > 0) {
+        for (i = 0; i < num_vcpus; i++) {
+            if (scinfo->vcpus[i].vcpuid < 0 ||
+                scinfo->vcpus[i].vcpuid > info.max_vcpu_id) {
+                LOG(ERROR, "VCPU index is out of range, "
+                           "valid values are within range from 0 to %d",
+                           info.max_vcpu_id);
+                rc = ERROR_INVAL;
+                goto out;
+            }
+            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
+        }
+    } else
+        for (i = 0; i < num_vcpus; i++)
+            vcpus[i].vcpuid = i;
+
+    r = xc_sched_rtds_vcpu_get(CTX->xch, domid, vcpus, num_vcpus);
+    if (r != 0) {
+        LOGE(ERROR, "getting vcpu sched rtds");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    scinfo->sched = LIBXL_SCHEDULER_RTDS;
+    if (scinfo->num_vcpus == 0) {
+        scinfo->num_vcpus = num_vcpus;
+        scinfo->vcpus = libxl__calloc(NOGC, num_vcpus,
+                                      sizeof(libxl_sched_params));
+    }
+    for (i = 0; i < num_vcpus; i++) {
+        scinfo->vcpus[i].period = vcpus[i].s.rtds.period;
+        scinfo->vcpus[i].budget = vcpus[i].s.rtds.budget;
+        scinfo->vcpus[i].vcpuid = vcpus[i].vcpuid;
+    }
+    return r;
+out:
+    return rc;
+}
+
+/* Set the RTDS scheduling parameters of vcpu(s) */
+static int sched_rtds_vcpus_params_set(libxl__gc *gc, uint32_t domid,
+                                       const libxl_vcpu_sched_params *scinfo)
+{
+    int r, rc;
+    int i;
+    uint16_t max_vcpuid;
+    xc_dominfo_t info;
+    struct xen_domctl_schedparam_vcpu *vcpus;
+    uint32_t num_vcpus;
+
+    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (r < 0) {
+        LOGE(ERROR, "getting domain info");
+        rc = ERROR_FAIL;
+  

[Xen-devel] [PATCH v6 for Xen 4.7 2/4] libxc: enable per-VCPU parameter settings for RTDS scheduler

2016-03-06 Thread Chong Li
Add xc_sched_rtds_vcpu_get/set functions to interact with
Xen to get/set a domain's per-VCPU parameters.

Signed-off-by: Chong Li 
Signed-off-by: Meng Xu 
Signed-off-by: Sisu Xi 

---
Changes on PATCH v5:
1) In xc_sched_rtds_vcpu_get/set, re-issuing the hypercall
if it is preempted.

Changes on PATCH v4:
1) Minor modifications on the function parameters.

Changes on PATCH v2:
1) Minor modifications due to the change of struct xen_domctl_scheduler_op.
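
As an illustration, a hedged sketch of how a caller might use the new
set function (assuming xch is an open xc_interface handle; the s.rtds
field names follow the libxl patch in this series):

    /* Give vcpuid 0 a 100us period and a 50us budget. */
    struct xen_domctl_schedparam_vcpu vcpu = { .vcpuid = 0 };

    vcpu.s.rtds.period = 100;  /* microseconds */
    vcpu.s.rtds.budget = 50;   /* microseconds */

    if ( xc_sched_rtds_vcpu_set(xch, domid, &vcpu, 1) )
        /* handle error: failure, or partial progress on preemption */;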

CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
---
 tools/libxc/include/xenctrl.h | 16 +++---
 tools/libxc/xc_rt.c   | 68 +++
 2 files changed, 80 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 01a6dda..9462271 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -888,11 +888,19 @@ int xc_sched_credit2_domain_get(xc_interface *xch,
                                 struct xen_domctl_sched_credit2 *sdom);
 
 int xc_sched_rtds_domain_set(xc_interface *xch,
-                            uint32_t domid,
-                            struct xen_domctl_sched_rtds *sdom);
+                             uint32_t domid,
+                             struct xen_domctl_sched_rtds *sdom);
 int xc_sched_rtds_domain_get(xc_interface *xch,
-                            uint32_t domid,
-                            struct xen_domctl_sched_rtds *sdom);
+                             uint32_t domid,
+                             struct xen_domctl_sched_rtds *sdom);
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus);
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus);
 
 int
 xc_sched_arinc653_schedule_set(
diff --git a/tools/libxc/xc_rt.c b/tools/libxc/xc_rt.c
index d59e5ce..4be9624 100644
--- a/tools/libxc/xc_rt.c
+++ b/tools/libxc/xc_rt.c
@@ -62,3 +62,71 @@ int xc_sched_rtds_domain_get(xc_interface *xch,
 
     return rc;
 }
+
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus)
+{
+    int rc = 0;
+    unsigned processed = 0;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(vcpus, sizeof(*vcpus) * num_vcpus,
+                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( xc_hypercall_bounce_pre(xch, vcpus) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putvcpuinfo;
+
+    while ( processed < num_vcpus )
+    {
+        domctl.u.scheduler_op.u.v.nr_vcpus = num_vcpus - processed;
+        set_xen_guest_handle_offset(domctl.u.scheduler_op.u.v.vcpus, vcpus,
+                                    processed);
+        if ( (rc = do_domctl(xch, &domctl)) != 0 )
+            break;
+        processed += domctl.u.scheduler_op.u.v.nr_vcpus;
+    }
+
+    xc_hypercall_bounce_post(xch, vcpus);
+
+    return rc;
+}
+
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_schedparam_vcpu *vcpus,
+                           uint32_t num_vcpus)
+{
+    int rc;
+    unsigned processed = 0;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(vcpus, sizeof(*vcpus) * num_vcpus,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+    if ( xc_hypercall_bounce_pre(xch, vcpus) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_getvcpuinfo;
+
+    while ( processed < num_vcpus )
+    {
+        domctl.u.scheduler_op.u.v.nr_vcpus = num_vcpus - processed;
+        set_xen_guest_handle_offset(domctl.u.scheduler_op.u.v.vcpus, vcpus,
+                                    processed);
+        if ( (rc = do_domctl(xch, &domctl)) != 0 )
+            break;
+        processed += domctl.u.scheduler_op.u.v.nr_vcpus;
+    }
+
+    xc_hypercall_bounce_post(xch, vcpus);
+
+    return rc;
+}
-- 
1.9.1




[Xen-devel] [PATCH v6 for Xen 4.7 4/4] xl: enable per-VCPU parameter settings for RTDS scheduler

2016-03-06 Thread Chong Li
Change main_sched_rtds and related output functions to support
per-VCPU settings.

Signed-off-by: Chong Li 
Signed-off-by: Meng Xu 
Signed-off-by: Sisu Xi 

---
Changes on PATCH v5:
1) Add sched_vcpu_set_all() for the case where all vcpus of a
domain need to be changed together.

Changes on PATCH v4:
1) Coding style changes

Changes on PATCH v3:
1) Support commands, e.g., "xl sched-rtds -d vm1" to output the
default scheduling parameters

Changes on PATCH v2:
1) Remove per-domain output functions for RTDS scheduler.

2) Users now use '-v all' to specify all VCPUs.

3) Support outputting a subset of the parameters of the VCPUs
of a specific domain.

4) When setting all VCPUs with the same parameters (by only one
command), no per-domain function is invoked.

CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
---
 docs/man/xl.pod.1 |   4 +
 tools/libxl/xl_cmdimpl.c  | 292 --
 tools/libxl/xl_cmdtable.c |  10 +-
 3 files changed, 269 insertions(+), 37 deletions(-)

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index 4279c7c..f9ff917 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1051,6 +1051,10 @@ B
 Specify domain for which scheduler parameters are to be modified or retrieved.
 Mandatory for modifying scheduler parameters.
 
+=item B<-v VCPUID/all>, B<--vcpuid=VCPUID/all>
+
+Specify vcpu for which scheduler parameters are to be modified or retrieved.
+
 =item B<-p PERIOD>, B<--period=PERIOD>
 
 Period of time, in microseconds, over which to replenish the budget.
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 2b6371d..7d5620f 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5823,6 +5823,52 @@ static int sched_domain_set(int domid, const libxl_domain_sched_params *scinfo)
     return 0;
 }
 
 
+static int sched_vcpu_get(libxl_scheduler sched, int domid,
+                          libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_get(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_get failed.\n");
+        exit(-1);
+    }
+    if (scinfo->sched != sched) {
+        fprintf(stderr, "libxl_vcpu_sched_params_get returned %s not %s.\n",
+                libxl_scheduler_to_string(scinfo->sched),
+                libxl_scheduler_to_string(sched));
+        return 1;
+    }
+
+    return 0;
+}
+
+static int sched_vcpu_set(int domid, const libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_set(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_set failed.\n");
+        exit(-1);
+    }
+
+    return rc;
+}
+
+static int sched_vcpu_set_all(int domid, const libxl_vcpu_sched_params *scinfo)
+{
+    int rc;
+
+    rc = libxl_vcpu_sched_params_set_all(ctx, domid, scinfo);
+    if (rc) {
+        fprintf(stderr, "libxl_vcpu_sched_params_set_all failed.\n");
+        exit(-1);
+    }
+
+    return rc;
+}
+
 static int sched_credit_params_set(int poolid, libxl_sched_credit_params *scinfo)
 {
     if (libxl_sched_credit_params_set(ctx, poolid, scinfo)) {
@@ -5942,6 +5988,38 @@ static int sched_rtds_domain_output(
     return 0;
 }
 
+static int sched_rtds_vcpu_output(
+    int domid, libxl_vcpu_sched_params *scinfo)
+{
+    char *domname;
+    int rc = 0;
+    int i;
+
+    if (domid < 0) {
+        printf("%-33s %4s %4s %9s %9s\n", "Name", "ID",
+               "VCPU", "Period", "Budget");
+        return 0;
+    }
+
+    rc = sched_vcpu_get(LIBXL_SCHEDULER_RTDS, domid, scinfo);
+    if (rc)
+        goto out;
+
+    domname = libxl_domid_to_name(ctx, domid);
+    for (i = 0; i < scinfo->num_vcpus; i++) {
+        printf("%-33s %4d %4d %9"PRIu32" %9"PRIu32"\n",
+               domname,
+               domid,
+               scinfo->vcpus[i].vcpuid,
+               scinfo->vcpus[i].period,
+               scinfo->vcpus[i].budget);
+    }
+    free(domname);
+
+out:
+    return rc;
+}
+
 static int sched_rtds_pool_output(uint32_t poolid)
 {
     char *poolname;
@@ -6015,6 +6093,65 @@ static int sched_domain_output(libxl_scheduler sched, int (*output)(int),
     return 0;
 }
 
+static int sched_vcpu_output(libxl_scheduler sched,
+                             int (*output)(int, libxl_vcpu_sched_params *),
+                             int (*pooloutput)(uint32_t), const char *cpupool)
+{
+    libxl_dominfo *info;
+    libxl_cpupoolinfo *poolinfo = NULL;
+    uint32_t poolid;
+    int nb_domain, n_pools = 0, i, p;
+    int rc = 0;
+
+    if (cpupool) {
+        if (libxl_cpupool_qualifier_to_cpupoolid(ctx, cpupool, &poolid, NULL)
+            || !libxl_cpupoolid_is_valid(ctx, poolid)) {
+            fprintf(stderr, "unknown cpupool \'%s\'\n", cpupool);
+            return -ERROR_FAIL;
+        }
+    }
+
+    info = libxl_list_domain(ctx, &nb_domain);
+    if (!info) {
+        fprintf(stderr, "libxl_list_domain failed.\n");
+        return 1;
+    }
+poo

[Xen-devel] [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler

2016-03-06 Thread Chong Li
Add XEN_DOMCTL_SCHEDOP_getvcpuinfo and _putvcpuinfo hypercalls
to independently get and set the scheduling parameters of each
vCPU of a domain.

Signed-off-by: Chong Li 
Signed-off-by: Meng Xu 
Signed-off-by: Sisu Xi 

---
Changes on PATCH v5:
1) When processing XEN_DOMCTL_SCHEDOP_get/putvcpuinfo, we do the
preemption check in a way similar to XEN_SYSCTL_pcitopoinfo.

Changes on PATCH v4:
1) Add uint32_t vcpu_index to struct xen_domctl_scheduler_op.
When processing XEN_DOMCTL_SCHEDOP_get/putvcpuinfo, we call
hypercall_preemption_check in case the current hypercall lasts
too long. If we decide to preempt the current hypercall, we record
the index of the most-recent finished vcpu into the vcpu_index of
struct xen_domctl_scheduler_op. So when we resume the hypercall after
preemption, we start processing from the posion specified by vcpu_index,
and don't need to repeat the work that has already been done in the
hypercall before the preemption.
(This design is based on the do_grant_table_op() in grant_table.c)

2) Coding style changes

Changes on PATCH v3:
1) Remove struct xen_domctl_schedparam_t.

2) Change struct xen_domctl_scheduler_op.

3) Check if period/budget is within a validated range

Changes on PATCH v2:
1) Change struct xen_domctl_scheduler_op, for transferring per-vcpu parameters
between libxc and hypervisor.

2) Handler of XEN_DOMCTL_SCHEDOP_getinfo now just returns the default budget
and period values of the RTDS scheduler.

3) Handler of XEN_DOMCTL_SCHEDOP_getvcpuinfo can now return a random subset of
the parameters of the VCPUs of a specific domain.
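
To make the continuation pattern described above concrete, here is a hedged
sketch (process_one_vcpu() is a placeholder for the per-VCPU copy-in,
validation and copy-out work; the real handler in sched_rt.c differs in
detail):

    /* Process as many VCPUs as possible; on preemption, report progress
     * back through nr_vcpus so that libxc (patch 2) can re-issue the
     * hypercall with an offset guest handle. */
    uint32_t index;

    for ( index = 0; index < op->u.v.nr_vcpus; index++ )
    {
        if ( index > 0 && hypercall_preemption_check() )
            break;                       /* preempted: stop early */
        process_one_vcpu(d, op, index);  /* placeholder helper */
    }
    op->u.v.nr_vcpus = index;  /* tell the caller how many were done */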

CC: 
CC: 
CC: 
CC: 
CC: 
CC: 
---
 xen/common/sched_credit.c   |   4 ++
 xen/common/sched_credit2.c  |   4 ++
 xen/common/sched_rt.c   | 130 +++-
 xen/common/schedule.c   |  15 -
 xen/include/public/domctl.h |  59 
 5 files changed, 182 insertions(+), 30 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 0dce790..455c684 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1054,6 +1054,10 @@ csched_dom_cntl(
      * lock. Runq lock not needed anywhere in here. */
     spin_lock_irqsave(&prv->lock, flags);
 
+    if ( op->cmd == XEN_DOMCTL_SCHEDOP_putvcpuinfo ||
+         op->cmd == XEN_DOMCTL_SCHEDOP_getvcpuinfo )
+        return -EINVAL;
+
     if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
     {
         op->u.credit.weight = sdom->weight;
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 3c49ffa..c3049a0 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1421,6 +1421,10 @@ csched2_dom_cntl(
      * runq lock to update csvcs. */
     spin_lock_irqsave(&prv->lock, flags);
 
+    if ( op->cmd == XEN_DOMCTL_SCHEDOP_putvcpuinfo ||
+         op->cmd == XEN_DOMCTL_SCHEDOP_getvcpuinfo )
+        return -EINVAL;
+
     if ( op->cmd == XEN_DOMCTL_SCHEDOP_getinfo )
     {
         op->u.credit2.weight = sdom->weight;
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 3f1d047..4fcbf40 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -86,6 +86,22 @@
 #define RTDS_DEFAULT_PERIOD     (MICROSECS(10000))
 #define RTDS_DEFAULT_BUDGET     (MICROSECS(4000))
 
+/*
+ * Max period: max delta of time type, because period is added to the time
+ * a vcpu activates, so this must not overflow.
+ * Min period: 10 us, considering the scheduling overhead (when period is
+ * too low, scheduling is invoked too frequently, causing high overhead).
+ */
+#define RTDS_MAX_PERIOD (STIME_DELTA_MAX)
+#define RTDS_MIN_PERIOD (MICROSECS(10))
+
+/*
+ * Min budget: 10 us, considering the scheduling overhead (when budget is
+ * consumed too fast, scheduling is invoked too frequently, causing
+ * high overhead).
+ */
+#define RTDS_MIN_BUDGET (MICROSECS(10))
+
 #define UPDATE_LIMIT_SHIFT  10
 #define MAX_SCHEDULE    (MILLISECS(1))
 /*
@@ -1130,23 +1146,17 @@ rt_dom_cntl(
     unsigned long flags;
     int rc = 0;
 
+    xen_domctl_schedparam_vcpu_t local_sched;
+    s_time_t period, budget;
+    uint32_t index = 0;
+
     switch ( op->cmd )
     {
-    case XEN_DOMCTL_SCHEDOP_getinfo:
-        if ( d->max_vcpus > 0 )
-        {
-            spin_lock_irqsave(&prv->lock, flags);
-            svc = rt_vcpu(d->vcpu[0]);
-            op->u.rtds.period = svc->period / MICROSECS(1);
-            op->u.rtds.budget = svc->budget / MICROSECS(1);
-            spin_unlock_irqrestore(&prv->lock, flags);
-        }
-        else
-        {
-            /* If we don't have vcpus yet, let's just return the defaults. */
-            op->u.rtds.period = RTDS_DEFAULT_PERIOD;
-            op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
-        }
+    case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
+        spin_lock_irqsave(&prv->lock, flags);
+        op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
+        op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
+   

[Xen-devel] [qemu-mainline baseline-only test] 44226: tolerable trouble: broken/fail/pass

2016-03-06 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 44226 qemu-mainline real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/44226/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-qemuu-nested-intel 14 capture-logs/l1(14)   broken like 44210
 test-amd64-amd64-xl  19 guest-start/debian.repeatfail   like 44210
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail like 44210
 test-amd64-amd64-amd64-pvgrub 10 guest-start  fail  like 44210

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass

version targeted for testing:
 qemuu    3c0f12df65da872d5fbccae469f2cb21ed1c03b7
baseline version:
 qemuu    9c279bec754a84c790b70674a5a224379c8dcda2

 Last test of basis    44210  2016-03-02 15:57:51 Z    4 days
 Testing same since    44226  2016-03-06 07:25:59 Z    0 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Amit Shah 
  Andrew Baumann 
  Christian Borntraeger 
  Denis V. Lunev 
  Greg Kurz 
  Hollis Blanchard 
  Ladi Prosek 
  Lluís Vilanova 
  Paolo Bonzini 
  Peter Crosthwaite 
  Peter Crosthwaite 
  Peter Crosthwaite 
  Peter Maydell 
  Ralf-Philipp Weinmann 
  Richard Henderson 
  Stefan Hajnoczi 
  Thomas Huth 
  Wei Huang 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd6

[Xen-devel] [linux-3.18 test] 85493: tolerable FAIL - PUSHED

2016-03-06 Thread osstest service owner
flight 85493 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85493/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeatfail   like 82793
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 82793
 build-amd64-rumpuserxen   6 xen-buildfail   like 82928
 build-i386-rumpuserxen6 xen-buildfail   like 82928
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 82928
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 82928

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestorefail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass

version targeted for testing:
 linux    0f67c5beb42a8328e9e661dcfcc4d328b6138264
baseline version:
 linux    2c07053b8e1e0c22bb54dfbdf8e86a70f8bf00fc

 Last test of basis    82928  2016-02-17 02:03:58 Z   18 days
 Testing same since    85493  2016-03-05 20:25:50 Z    0 days    1 attempts


People who touched revisions under test:
  Alex Deucher 
  Alexander Gordeev 
  Alexandra Yates 
  Alexei Potashnik 
  Andrey Konovalov 
  Andy Shevchenko 
  Anton Protopopov 
  Arnd Bergmann 
  Axel Lin 
  Bard Liao 
  Bart Van Assche 
  Benjamin Herrenschmidt 
  Bjorn Helgaas 
  Bruno Prémont 
  Chad Dupuis 
  Chris Mason 
  Christoph Hellwig 
  Clemens Ladisch 
  CQ Tang 
  Dan Carpenter 
  Darren Hart 
  Dave Airlie 
  David Henningsson 
  David Sterba 
  David Woodhouse 
  Dmitry Monakhov 
  Dmitry Vyukov 
  Eryu Guan 
  Ewan D. Milne 
  Filipe Manana 
  Gavin Shan 
  Gerd Hoffmann 
  Greg Kroah-Hartman 
  Hannes Reinecke 
  Herbert Xu 
  Herton R. Krzesinski 
  Himanshu Madhani 
  Holger Hoffstätte 
  Insu Yun 
  James Bottomley 
  James Hogan 
  James Morris 
  Jan Kara 
  Jani Nikula 
  Jeremy McNicoll 
  Kishon Vijay Abraham I 
  Liam Girdwood 
  Linus Torvalds 
  Linus Walleij 
  Mans Rullgard 
  Mark Brown 
  Martin K. Petersen 
  Martin Schwidefsky 
  Mathias Krause 
  Mi

[Xen-devel] [linux-linus test] 85509: regressions - FAIL

2016-03-06 Thread osstest service owner
flight 85509 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen6 xen-build fail REGR. vs. 59254
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 59254
 test-amd64-amd64-xl  15 guest-localmigratefail REGR. vs. 59254
 test-amd64-amd64-xl-credit2  15 guest-localmigratefail REGR. vs. 59254
 test-amd64-i386-xl   15 guest-localmigratefail REGR. vs. 59254
 test-amd64-amd64-xl-xsm  14 guest-saverestore fail REGR. vs. 59254
 test-amd64-i386-xl-xsm   15 guest-localmigratefail REGR. vs. 59254
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate   fail REGR. vs. 59254
 test-amd64-amd64-pair  22 guest-migrate/dst_host/src_host fail REGR. vs. 59254
 test-armhf-armhf-xl  15 guest-start/debian.repeat fail REGR. vs. 59254
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail REGR. vs. 59254
 test-armhf-armhf-xl-xsm  11 guest-start   fail REGR. vs. 59254
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeat fail REGR. vs. 59254
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 59254
 test-amd64-i386-pair   22 guest-migrate/dst_host/src_host fail REGR. vs. 59254

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 15 guest-localmigratefail REGR. vs. 59254
 test-armhf-armhf-xl-rtds 11 guest-start   fail REGR. vs. 59254
 test-amd64-i386-libvirt-pair 22 guest-migrate/dst_host/src_host fail baseline untested
 test-amd64-amd64-libvirt-pair 22 guest-migrate/dst_host/src_host fail baseline untested
 test-armhf-armhf-xl-vhd   9 debian-di-install   fail baseline untested
 test-amd64-i386-libvirt-xsm  14 guest-saverestorefail blocked in 59254
 test-amd64-amd64-libvirt 15 guest-saverestore.2  fail blocked in 59254
 test-amd64-amd64-libvirt-xsm 15 guest-saverestore.2  fail blocked in 59254
 test-amd64-i386-libvirt  15 guest-saverestore.2  fail blocked in 59254
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 59254
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 59254
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 59254
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 59254

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1   fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass

Re: [Xen-devel] Patching error while setting up COLO

2016-03-06 Thread Wen Congyang
On 03/05/2016 09:51 AM, Yu-An(Victor) Chen wrote:
> Hi Congyang,
> 
> Thanks for your reply,
> 
> Even with your script, after modifying "path_to_xen_source" to point to
> where my xen directory is, I still get this error.
> 
> ERROR: User requested feature xen
>configure was not able to find it.
>Install xen devel
> 
> What do you think I am missing? Thank you!

Did you build xen first?

Thanks
Wen Congyang

> 
> Victor
> 
> 
> 
> On Thu, Mar 3, 2016 at 6:15 PM, Wen Congyang wrote:
> 
> On 03/04/2016 10:01 AM, Yu-An(Victor) Chen wrote:
> > Hi,
> >
> > So I git clone 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_wencongyang_qemu-2Dxen.git&d=CwICaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=IitX1U91-NhsQt0q4MJOLQ&m=4j1T2HKL4uKodf62b4Tz1XtOvX81uAqCqfOcD90CRAY&s=s0fo5ej8_vZ1PmOkDCuyIroS5Zi_KpDSHI8jqodSmrg&e=
> >
> > but I only see branch "con-xen-v2" instead of "colo-xen-v2", so I
> > assume I should just use con-xen-v2.
> >
> > But then the following step:
> >
> > in both ~/qemu-colo and ~/qemu-xen
> >
> > ./configure --enable-xen --target-list=x86_64-softmmu 
> --extra-cflags="-I$path_to_xen_source/tools/include 
> -I$path_to_xen_source/tools/libxc -I$path_to_xen_source/tools/xenstore" 
> --extra-ldflags="-L$path_to_xen_source/tools/libxc 
> -L$path_to_xen_source/tools/xenstore"
> 
> 
> This command line is out of date. The following is my build script:
> #! /bin/bash
> 
> path_to_xen_source=/work/src/xen
> #./configure --enable-xen --target-list=i386-softmmu \
> #--extra-cflags="-I$path_to_xen_source/tools/include 
> -I$path_to_xen_source/tools/libxc/include 
> -I$path_to_xen_source/tools/xenstore/include" \
> #--extra-ldflags="-L$path_to_xen_source/tools/libxc 
> -L$path_to_xen_source/tools/xenstore"
> 
> extra_cflags=""
> extra_cflags+=" -DXC_WANT_COMPAT_EVTCHN_API=1"
> extra_cflags+=" -DXC_WANT_COMPAT_GNTTAB_API=1"
> extra_cflags+=" -DXC_WANT_COMPAT_MAP_FOREIGN_API=1"
> extra_cflags+=" -I$path_to_xen_source/tools/include"
> extra_cflags+=" -I$path_to_xen_source/tools/libs/toollog/include"
> extra_cflags+=" -I$path_to_xen_source/tools/libs/evtchn/include"
> extra_cflags+=" -I$path_to_xen_source/tools/libs/gnttab/include"
> extra_cflags+=" -I$path_to_xen_source/tools/libs/foreignmemory/include"
> extra_cflags+=" -I$path_to_xen_source/tools/libxc/include"
> extra_cflags+=" -I$path_to_xen_source/tools/xenstore/include"
> extra_cflags+=" -I$path_to_xen_source/tools/xenstore/compat/include"
> extra_cflags+=" "
> 
> extra_ldflags=""
> extra_ldflags+=" -L$path_to_xen_source/tools/libxc"
> extra_ldflags+=" -L$path_to_xen_source/tools/xenstore"
> extra_ldflags+=" -L$path_to_xen_source/tools/libs/evtchn"
> extra_ldflags+=" -L$path_to_xen_source/tools/libs/gnttab"
> extra_ldflags+=" -L$path_to_xen_source/tools/libs/foreignmemory"
> extra_ldflags+=" -Wl,-rpath-link=$path_to_xen_source/tools/libs/toollog"
> extra_ldflags+=" -Wl,-rpath-link=$path_to_xen_source/tools/libs/evtchn"
> extra_ldflags+=" -Wl,-rpath-link=$path_to_xen_source/tools/libs/gnttab"
> extra_ldflags+=" -Wl,-rpath-link=$path_to_xen_source/tools/libs/call"
> extra_ldflags+=" -Wl,-rpath-link=$path_to_xen_source/tools/libs/foreignmemory"
> extra_ldflags+=" "
> 
> ./configure --enable-xen --target-list=i386-softmmu \
> --extra-cflags="$extra_cflags" \
> --extra-ldflags="$extra_ldflags"
> 
> if [[ $? -ne 0 ]]; then
> exit 1
> fi
> 
> #make -j8 && make clean
> make -j8
> 
> You can find the current build recipe in tools/Makefile (in Xen's code):
> subdir-all-qemu-xen-dir: qemu-xen-dir-find
> if test -d $(QEMU_UPSTREAM_LOC) ; then \
> source=$(QEMU_UPSTREAM_LOC); \
> else \
> source=.; \
> fi; \
> cd qemu-xen-dir; \
> if $$source/scripts/tracetool.py --check-backend --backend stderr ; then \
> enable_trace_backend='--enable-trace-backend=stderr'; \
> else \
> enable_trace_backend='' ; \
> fi ; \
> $$source/configure --enable-xen --target-list=i386-softmmu \
> $(QEMU_XEN_ENABLE_DEBUG) \
> $$enable_trace_backend \
> --prefix=$(LIBEXEC) \
> --libdir=$(LIBEXEC_LIB) \
> --includedir=$(LIBEXEC_INC) \
> 
> 
> Thanks
> Wen Congyang
> 
> >
> >
> > I got the following error message:
> >
> > "ERROR: User requested feature xen
> >configure was not able to find it.
> >Install xen devel"
> >
> > I found out that the error comes from simply doing this:
> >
> > ./configure --enable-xen
>
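
A stand-alone sketch of the kind of probe configure runs may help narrow this
down (this assumes qemu's usual compile-and-link test; the file name and the
reuse of the script's $extra_cflags/$extra_ldflags are illustrative):

cat > xen-probe.c <<'EOF'
/* If this fails to compile or link, configure reports "Install xen devel". */
#include <xenctrl.h>
#include <xenstore.h>
int main(void)
{
    xc_interface *xc = xc_interface_open(0, 0, 0);
    xc_interface_close(xc);
    return 0;
}
EOF
gcc xen-probe.c -o xen-probe $extra_cflags $extra_ldflags \
    -lxenctrl -lxenstore

The exact compiler error configure hit should also be recorded in qemu's
config.log.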

[Xen-devel] [xen-4.5-testing test] 85519: tolerable FAIL - PUSHED

2016-03-06 Thread osstest service owner
flight 85519 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85519/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub   9 debian-di-install  fail in 85360 pass in 85519
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 85360

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail in 85360 like 83003
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 85360 like 83135
 test-amd64-amd64-xl-rtds  6 xen-boot fail   like 83135
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 83135
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 83135
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 83135
 test-armhf-armhf-xl-rtds 11 guest-start  fail   like 83135

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 85360 never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail in 85360 never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  10 guest-start  fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2 10 guest-start  fail never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-libvirt-raw 10 guest-start  fail   never pass

version targeted for testing:
 xen  d165c490224da17c5dcaa2964fbcf59cd7dedc56
baseline version:
 xen  fe71162ab965d4a3344bb867f88e967806c80af5

Last test of basis   83135  2016-02-19 06:43:29 Z   16 days
Failing since        84927  2016-03-01 13:45:33 Z    5 days    5 attempts
Testing same since   85360  2016-03-04 18:51:43 Z    2 days    2 attempts


People who touched revisions under test:
  Andrew Cooper 
  Ian Campbell 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64   

Re: [Xen-devel] [PATCH v11 20/27] Support colo mode for qemu disk

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:44 AM, Ian Jackson wrote:
> Changlong Xie writes ("[PATCH v11 20/27] Support colo mode for qemu disk"):
>> From: Wen Congyang 
>>
>> Usage: disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
>> For QEMU block replication details:
>> http://wiki.qemu.org/Features/BlockReplication
> 
> So now I am slightly confused by the design, I think.
> 
> When you replicate a VM with COLO using xl, its memory state is
> transferred over ssh.  But its disk replication is done unencrypted
> and unauthenticated ?

Yes, it is a problem. I will think about how to improve it.

> 
> And the disk replication is, out of band, and needs to be configured
> separately ?  This is rather awkward, although maybe not a
> showstopper.  (Maybe we can have a plan to fix it in the future...)

colo-host,colo-port should be the global configuration, and colo-export,
active-disk,hidden-disk must be configured separately, because each
disk should have a different configuration.
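
For illustration only, a single disk line following that syntax might look
like this (every value below is a made-up placeholder):

disk = ['format=raw,vdev=hda,access=w,target=/mnt/colo/primary.img,colo,colo-host=192.168.0.2,colo-port=9000,colo-export=colo-disk0,active-disk=/mnt/colo/active.img,hidden-disk=/mnt/colo/hidden.img']

Here 192.168.0.2:9000 is where the secondary's nbd server listens, and
active-disk/hidden-disk are paths on the secondary host.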

> 
> And, how does the disk replication, which doesn't depend on there
> being xl running, relate to the vm state replication, which does ?  I
> think at the very least I'd like to see some information about the
> principles of operation - either explained, or referred to, in the
> user manual.

OK. The disk replication doesn't depend on xl. We can only operate it
via qemu monitor commands:
1. stop the vm
2. do the checkpoint
3. start the vm
Steps 1/3 suspend/resume the guest. We only need to do step 2 when both
vms are in a consistent state.
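
As a sketch, with illustrative command names (only stop/cont are generic
monitor commands; the checkpoint command itself was COLO-specific and still
under review at the time):

(qemu) stop                     # 1. suspend the guest
(qemu) xen-colo-do-checkpoint   # 2. hypothetical COLO checkpoint command
(qemu) cont                     # 3. resume the guest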

> 
> Is it possible to use COLO with an existing full-service disk
> replication service such as DRBD ?

DRBD doesn't support a case like COLO's, because both the primary guest
and the secondary guest need to write to the disk.

> 
>> +(a) An example for COLO replication's configuration: disk =['...,colo,colo-host
>> +=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
>> +
>> +=item B<colo-host>  :Secondary host's ip address.
>> +
>> +=item B<colo-port>  :Secondary host's port, we will run a nbd server on
>> +secondary host, and the nbd server will listen this port.
>> +
>> +=item B<colo-export>:Nbd server's disk export name of secondary host.
>> +
>> +=item B<active-disk>:Secondary's guest write will be buffered in this disk,
>> +and it's used by secondary.
>> +
>> +=item B<hidden-disk>:Primary's modified contents will be buffered in this
>> +disk, and it's used by secondary.
> 
> What would a typical configuration look like ?  I don't understand the
> relationship between active-disk and hidden-disk, etc.

QEMU has a feature: backing files.
For example: A's backing file is B.
1. If we read from A but the sector is not allocated in A, we would return a
   zero sector to the guest. If A has a backing file, we instead read the
   sector from B rather than returning a zero sector.
2. The backing file doesn't affect the write operation.
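
A sketch of that with qemu-img (file names are placeholders):

# A.qcow2 starts empty; reads of sectors unallocated in A are served
# from B, while writes go to A only and never touch B.
qemu-img create -f qcow2 -b B.qcow2 A.qcow2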

QEMU has another feature: the backup block job.
A backup job has two files: one is the source and the other is the target. It
has several running modes. For block replication, we use the mode "sync=none".
In this mode, we read the data from the source disk before we modify it, and
write it to the target disk. We keep a bitmap to remember which sectors have
been backed up from the source disk to the target disk. If the target disk is
an empty disk whose backing file is the source disk, we can read from the
target disk to get the source disk's original data.


How does block replication work:
A. primary qemu:
1. use the block driver quorum: it will read from all children and write to
   all children.
   child 0: real disk
   child 1: nbd client
   Reading from child 1 would fail, but we use the fifo mode. In this mode,
   reads are served from child 0 and we never read from child 1.
   Writes to child 1: because child 1 is an nbd client, it forwards the write
   request to the nbd server.
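
A sketch of the primary side on the qemu command line (values are
placeholders; the full child-1 wiring towards the secondary's nbd server
follows the wiki page above and is elided here):

-drive driver=quorum,read-pattern=fifo,vote-threshold=1,\
       children.0.file.filename=/mnt/colo/primary.img,children.0.driver=raw
# children.1 would be the nbd client that forwards every write to the
# secondary's nbd server (see the wiki page for the exact syntax)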

B. secondary qemu:
We have 3 disks: the active disk (call it A), the hidden disk (call it H),
and the secondary disk (the real disk, call it S).
A's backing file is H, and H's backing file is S.
We also start a backup job: the source disk is S, and the target disk is H.
We run an nbd server in the secondary qemu, and the nbd server writes to S.

Before resuming both the primary vm and the secondary vm, the state is:
1. the primary disk and the secondary disk are in a consistent state
   (contain the same data)
2. the active disk and the hidden disk are empty disks
When the guest is running:
1. The NBD server receives the primary's write operations and writes the
   data to S.
2. Before we write data to S, the backup job reads the original data and
   backs it up to H.
3. The secondary vm writes data to A.
4. If the secondary vm reads data from A:
   I. If the sector is allocated in A, read it from A.
  II. Otherwise, the secondary vm hasn't modified this sector since the
      latest checkpoint.
 III. In this case, we read it from H. We can read S's original data from H
      (see the explanation of the backup job above).

If we have more than 1 rea
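
As a sketch, the secondary's disk chain described above (A on top of H on top
of S) could be prepared like this -- paths are placeholders, and in practice
the COLO tooling sets this up:

# S: the real secondary disk, already consistent with the primary's disk
# H: the hidden disk, empty, with S as its backing file
qemu-img create -f qcow2 -b /mnt/colo/secondary.img /mnt/colo/hidden.img
# A: the active disk, empty, with H as its backing file
qemu-img create -f qcow2 -b /mnt/colo/hidden.img /mnt/colo/active.img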

Re: [Xen-devel] [PATCH v11 20/27] Support colo mode for qemu disk

2016-03-06 Thread Wen Congyang
On 03/05/2016 04:30 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Mar 04, 2016 at 05:52:09PM +, Ian Jackson wrote:
>> Changlong Xie writes ("[PATCH v11 20/27] Support colo mode for qemu disk"):
>>> +Enable COLO HA for disk. For better understanding block replication on
>>> +QEMU, please refer to:
>>> +http://wiki.qemu.org/Features/BlockReplication
>>
>> Sorry, I missed this link on my first pass.  I still think that at the
>> very least this needs something more user-facing (ie, how should one
>> set this up).
>>
>> But, I'm kind of worried that qemu is the wrong place to be doing
>> this.
>>
>> How can this be made to work with PV guests ?
> 
> QEMU can also serve PV guests (qdisk).
> 
> I think your question is more of - what about making this work with
> PV block backend?

I don't know how to make it work with the PV block backend. It is one reason
why we only support pure HVM now.
The PV block backend also has another problem: for example, resuming it on
the secondary side is very slow, because we need to disconnect and reconnect.

Thanks
Wen Congyang

>>
>> What if an HVM guest has PV-on-HVM drivers ?  In this case there might
>> be two relevant qemus, one for the qdisk Xen PV block backend, and one
>> for the emulated IDE.
> 
> In both cases QEMU would use the same underlaying API to actually write/read
> out the blocks. That API would then use NBD, etc to replicate writes.
> 
> Maybe a little ASCII art?
> 
>   qdisk     ide
>      \      /
>       \    /
>     block API
>         |
>       QCOW2
>         |
>        NBD
> 
> Or such?
> 
>>
>> I don't understand how discrepant writes are detected.  Surely they
>> might occur and should trigger a resynch ?
>>
>> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 10/27] tools/libxl: add back channel support to write stream

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:00 AM, Ian Jackson wrote:
> Changlong Xie writes ("[PATCH v11 10/27] tools/libxl: add back channel support to write stream"):
>> From: Wen Congyang 
>>
>> Add back channel support to write stream. If the write stream is
>> a back channel stream, this means the write stream is used by
>> Secondary to send some records back.
> 
> The general idea seems fine to me but I want an opinion from Andrew.
> 
> If I'm not mistaken there is no call site for this yet.  In which case
> this should be mentioned in the commit message.
> 
>> +/*- checkpoint state -*/
>> +void libxl__stream_write_checkpoint_state(libxl__egc *egc,
>> +  libxl__stream_write_state *stream,
>> +  libxl_sr_checkpoint_state *srcs)
> 
> Firstly, missing blank line.
> 
> Secondly, reading all this leads me to think that maybe the
> `checkpoint_state' record should be called something different.  Is it
> only ever going to be used for COLO ?  Maybe it should be

Yes, it is only used for COLO now.

> `COLOHA_STATE' or something (and all the functions etc. renamed
> consequently) ?
> 
> What do you think ?

COLO is FT, not HA. What about COLOFT_STATE?

Thanks
Wen Congyang

> 
> Thanks,
> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 12/27] tools/libx{l, c}: introduce wait_checkpoint callback

2016-03-06 Thread Wen Congyang
On 03/05/2016 04:23 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Mar 04, 2016 at 05:03:16PM +, Ian Jackson wrote:
>> Changlong Xie writes ("[PATCH v11 12/27] tools/libx{l,c}: introduce wait_checkpoint callback"):
>>> From: Wen Congyang 
>>>
>>> Under COLO, we are doing checkpoint on demand, if this
>>> callback returns 1, we will take another checkpoint.
>>> 0 indicates unexpected error.
>>
>> This doesn't seem to have a corresponding implementation.  I think the
>> implementation ought to be in the same patch.
>>
>> If 0 is always an `unexpected error', perhaps the return value should
>> be an error code or something ?  I'm not sure.
> 
> I struggled with this API.
> 
> I like the idea of that negative value would imply 'unexpected error'.
> And 1 for 'OK, take another checkpoint'. Not sure if zero would be a valid
> return value..

IIRC, the save/restore callbacks always use 0 for an unexpected error and
1 for OK; a negative value means the pipe is broken.
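
As a sketch of that convention (the callback and helper names here are made
up; only the return values follow the existing save/restore style):

/* returns 1: OK, take another checkpoint; 0: unexpected error */
static int colo_wait_checkpoint(void *data)
{
    libxl__colo_save_state *css = data;   /* assumed caller context */

    if ( wait_for_secondary_ready(css) )  /* hypothetical helper */
        return 1;
    return 0;
}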

Thanks
Wen Congyang

> 
> 
>>
>> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [linux-3.18 baseline-only test] 44227: regressions - FAIL

2016-03-06 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 44227 linux-3.18 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/44227/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeat fail REGR. vs. 44186

Regressions which are regarded as allowable (not blocking):
 build-i386-rumpuserxen    6 xen-build fail   like 44186
 build-amd64-rumpuserxen   6 xen-build fail   like 44186
 test-amd64-amd64-xl-credit2  19 guest-start/debian.repeat fail   like 44186

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail never pass

version targeted for testing:
 linux0f67c5beb42a8328e9e661dcfcc4d328b6138264
baseline version:
 linux2c07053b8e1e0c22bb54dfbdf8e86a70f8bf00fc

Last test of basis   44186  2016-02-26 15:55:01 Z    9 days
Testing same since   44227  2016-03-06 18:52:12 Z    0 days    1 attempts


People who touched revisions under test:
  Alex Deucher 
  Alexander Gordeev 
  Alexandra Yates 
  Alexei Potashnik 
  Andrey Konovalov 
  Andy Shevchenko 
  Anton Protopopov 
  Arnd Bergmann 
  Axel Lin 
  Bard Liao 
  Bart Van Assche 
  Benjamin Herrenschmidt 
  Bjorn Helgaas 
  Bruno Prémont 
  Chad Dupuis 
  Chris Mason 
  Christoph Hellwig 
  Clemens Ladisch 
  CQ Tang 
  Dan Carpenter 
  Darren Hart 
  Dave Airlie 
  David Henningsson 
  David Sterba 
  David Woodhouse 
  Dmitry Monakhov 
  Dmitry Vyukov 
  Eryu Guan 
  Ewan D. Milne 
  Filipe Manana 
  Gavin Shan 
  Gerd Hoffmann 
  Greg Kroah-Hartman 
  Hannes Reinecke 
  Herbert Xu 
  Herton R. Krzesinski 
  Himanshu Madhani 
  Holger Hoffstätte 
  Insu Yun 
  James Bottomley 
  James Hogan 
  James Morris 
  Jan Kara 
  Jani Nikula 
  Jeremy McNicoll 
  Kishon Vijay Abraham I 
  Liam Girdwood 
  Linus 

Re: [Xen-devel] [PATCH v11 14/27] secondary vm suspend/resume/checkpoint code

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:11 AM, Ian Jackson wrote:
> Changlong Xie writes ("[PATCH v11 14/27] secondary vm suspend/resume/checkpoint code"):
>> From: Wen Congyang 
>>
>> Secondary vm is running in colo mode. So we will do
>> the following things again and again:
> 
> I don't propose to review this in detail.  Skimreading it, it looks
> plausible.  I don't think a detailed review is needed.
> 
> I will review the changes to the core code.
> 
>> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
>> index 5d9f497..2bfed64 100644
>> --- a/tools/libxc/xc_sr_common.h
>> +++ b/tools/libxc/xc_sr_common.h
>> @@ -184,10 +184,12 @@ struct xc_sr_context
>>   * migration stream
>>   * 0: Plain VM
>>   * 1: Remus
>> + * 2: COLO
>>   */
>>  enum {
>>  MIG_STREAM_NONE, /* plain stream */
>>  MIG_STREAM_REMUS,
>> +MIG_STREAM_COLO,
> 
> I think this shows that the duplicated list (in the comment, above the
> enum) is a mistake.  I would prefer it to be removed.

Do you mean remove the comments?

> 
>> +/* = colo: common functions = */

Add a blank line here? Will fix it in the next version.

>> +static void colo_enable_logdirty(libxl__colo_restore_state *crs, libxl__egc *egc)
> 
> Here's another missing blank line.  This seems to be a general theme:
> can you change this everywhere ?  Thanks.
> 
>> @@ -994,6 +1011,8 @@ static void domcreate_bootloader_done(libxl__egc *egc,
>>  const int restore_fd = dcs->restore_fd;
>>  libxl__domain_build_state *const state = &dcs->build_state;
>>  const int checkpointed_stream = dcs->restore_params.checkpointed_stream;
>> +libxl__colo_restore_state *const crs = &dcs->crs;
>> +libxl_domain_build_info *const info = &d_config->b_info;
>>  
>>  if (rc) {
>>  domcreate_rebuild_done(egc, dcs, rc);
>> @@ -1022,6 +1041,13 @@ static void domcreate_bootloader_done(libxl__egc *egc,
>>  
>>  /* Restore */
>>  
>> +/* COLO only supports HVM now */
>> +if (info->type != LIBXL_DOMAIN_TYPE_HVM &&
>> +checkpointed_stream == LIBXL_CHECKPOINTED_STREAM_COLO) {
>> +rc = ERROR_FAIL;
>> +goto out;
> 
> Please log something here, or it may be very mysterious.

OK. Will add some comments to explain why we only support pure HVM now.

> 
>> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
>> index 6307b71..48b4858 100644
>> --- a/tools/libxl/libxl_internal.h
>> +++ b/tools/libxl/libxl_internal.h
>> @@ -87,6 +87,8 @@
>> @@ -3468,7 +3464,6 @@ libxl__stream_read_inuse(const libxl__stream_read_state *stream)
>>  return stream->running;
>>  }
>>  
>> -
>>  struct libxl__domain_create_state {
>>  /* filled in by user */
>>  libxl__ao *ao;
>> @@ -3484,6 +3479,8 @@ struct libxl__domain_create_state {
> 
> Unintentional whitespace change.

Sorry for the mistake. Will fix it in the next version.

Thanks
Wen Congyang

> 
> 
> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 15/27] primary vm suspend/resume/checkpoint code

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:14 AM, Ian Jackson wrote:
> Changlong Xie writes ("[PATCH v11 15/27] primary vm suspend/resume/checkpoint code"):
>> From: Wen Congyang 
> 
> I would look at this on the same basis as the previous patch.
> 
>> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
>> index 48b4858..5160939 100644
>> --- a/tools/libxl/libxl_internal.h
>> +++ b/tools/libxl/libxl_internal.h
> 
>> +struct libxl__stream_read_state {
>> +/* filled by the user */
>> +libxl__ao *ao;
>> +libxl__domain_create_state *dcs;
>> +int fd;
>> +bool legacy;
> 
> Can you please split out this code motion into a separate patch ?
> As it is it is very difficult to review.

OK, will fix it in the next version.

Thanks
Wen Congyang

> 
> Thanks,
> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 17/27] libxc/save: support COLO save

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:18 AM, Ian Jackson wrote:
> Changlong Xie writes ("[PATCH v11 17/27] libxc/save: support COLO save"):
>> From: Wen Congyang 
>>
>> After suspend primary vm, get dirty bitmap on secondary vm,
>> and send pages both dirty on primary/secondary to secondary.
> 
> This patch again seems like a plausible kind of thing.  Again, I'd
> like to hear from Andrew.
> 
>> +static int merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
>> +{
> 
> This function might want the word `colo' in its name somewhere.

OK, will fix it in the next version.

Thanks
Wen Congyang

> 
> Thanks,
> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 18/27] implement the cmdline for COLO

2016-03-06 Thread Wen Congyang
On 03/05/2016 01:22 AM, Ian Jackson wrote:
>> --- a/docs/man/xl.pod.1
>> +++ b/docs/man/xl.pod.1
> ...
>> + COLO support in xl is still in experimental (proof-of-concept) phase.
>> + There is no support for network or disk at the moment.
> 
> I think you need to spell out the lack of storage and network handling
> means that the guest will corrupt its disk and confuse its network
> peers.

OK, will fix it in the next version.

> 
>> @@ -875,7 +890,10 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
>>  dss->live = 1;
>>  dss->debug = 0;
>>  dss->remus = info;
>> -dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_REMUS;
>> +if (libxl_defbool_val(info->colo))
>> +dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_COLO;
>> +else
>> +dss->checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_REMUS;
> 
> If you prefer, a ? : expression would do here as well.
> 
>  +dss->checkpointed_stream =
>libxl_defbool_val() ? LIBXL_CHECKPOINTED_STREAM_COLO :...

If so, this line will contain more than 80 characters, so I will not
change it.

Thanks
Wen Congyang

> 
> (only completed with sensible formatting).  Up to you - it's fine as
> it is, too.
> 
> Most of this patch looks good to me.
> 
> Ian.
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] Xentrace on Xilinx ARM

2016-03-06 Thread Ben Sanda
Dario,

Thank you very much for the help. I apologize for the HTML output in
the first email; I thought I had Outlook set to send it in plain text.
My mistake.

> Well, in this other thread, Paul (Cc-ed) says he basically has tracing working on ARM:

> http://lists.xenproject.org/archives/html/xen-devel/2016-02/msg03373.html

I hadn't found that thread by Paul; thank you for pointing it out. I would be
eager to see what additional changes he had to make to get it actually working.
It sounds like we headed down the same path, but there's more needed
that I'm unaware of.

Paul, I would be eager to see what changes you had to make to get
xentrace working on ARM and compare that against what I've tried. If
we could push up a formal patch that would be excellent.

Thanks,
Ben

-Original Message-
From: Dario Faggioli [mailto:dario.faggi...@citrix.com] 
Sent: 05 March, 2016 10:43
To: Ben Sanda ; xen-devel@lists.xen.org
Cc: Paul Sujkov 
Subject: Re: [Xen-devel] Xentrace on Xilinx ARM

On Fri, 2016-03-04 at 20:53 +, Ben Sanda wrote:
> Hello,
>  
Hello,

first of all, please, use plain text instead of HTML for emails to this list.

> My name is Ben Sanda, I’m a kernel/firmware developer with DornerWorks 
> engineering. Our team is working on support for Xen on the new Xilinx
> Ultrascale+ MPSoC platforms (ARM A53 core) and I’ve specifically been
> tasked
> with characterizing performance, particularly that of the schedulers.
> I wanted
> to make use of the xentrace tool to help give us some timing and 
> performance benchmarks, but searching over the Xen mailing lists it 
> appears xentrace has not yet been ported to ARM.
>
No, tracing support for ARM is not present upstream
 
> In searching for existing topics on this my main reference thread for 
> this has been the “[Xen-devel] xentrace, arm, hvm” email chain started 
> by Pavlo Suikov
> here: http://xen.markmail.org/thread/zochggqxcifs5cdi
> 
Well, in this other thread, Paul (Cc-ed) says he basically has tracing working on ARM:

http://lists.xenproject.org/archives/html/xen-devel/2016-02/msg03373.html

Any chance you two can cooperate to get such support upstream?
That would be really cool. :-)

Regards,
Dario
--
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli Senior Software Engineer, 
Citrix Systems R&D Ltd., Cambridge (UK)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable test] 85533: tolerable FAIL

2016-03-06 Thread osstest service owner
flight 85533 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85533/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-i386-rumpuserxen    6 xen-build fail   like 85380
 build-amd64-rumpuserxen   6 xen-build fail   like 85380
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 85380
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 85380
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 85380
 test-armhf-armhf-xl-rtds 11 guest-start  fail   like 85380

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass

version targeted for testing:
 xen  1bd52e1fd66c47af690124d74d11ccb271c96f6b
baseline version:
 xen  1bd52e1fd66c47af690124d74d11ccb271c96f6b

Last test of basis   85533  2016-03-06 05:46:33 Z    0 days
Testing same since       0  1970-01-01 00:00:00 Z 16867 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  pass
 build-i386-oldkern   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 buil

[Xen-devel] [ovmf test] 85550: regressions - FAIL

2016-03-06 Thread osstest service owner
flight 85550 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85550/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 65543
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 65543

version targeted for testing:
 ovmf 9353c60cea6eeedbbe4b336aea02646e2bf25f47
baseline version:
 ovmf 5ac96e3a28dd26eabee421919f67fa7c443a47f1

Last test of basis   65543  2015-12-08 08:45:15 Z   89 days
Failing since        65593  2015-12-08 23:44:51 Z   89 days   93 attempts
Testing same since   85550  2016-03-06 09:12:23 Z    0 days    1 attempts


People who touched revisions under test:
  "Samer El-Haj-Mahmoud" 
  "Yao, Jiewen" 
  Alcantara, Paulo 
  Anbazhagan Baraneedharan 
  Andrew Fish 
  Ard Biesheuvel 
  Arthur Crippa Burigo 
  Cecil Sheng 
  Chao Zhang 
  Charles Duffy 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Daocheng Bu 
  Daryl McDaniel 
  David Woodhouse 
  edk2 dev 
  edk2-devel 
  Eric Dong 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Feng Tian 
  Fu Siyuan 
  Hao Wu 
  Haojian Zhuang 
  Hess Chen 
  Heyi Guo 
  Jaben Carsey 
  Jeff Fan 
  Jiaxin Wu 
  jiewen yao 
  Jim Dailey 
  jim_dai...@dell.com 
  Jordan Justen 
  Karyne Mayer 
  Larry Hauch 
  Laszlo Ersek 
  Leahy, Leroy P 
  Lee Leahy 
  Leekha Shaveta 
  Leif Lindholm 
  Liming Gao 
  Mark Rutland 
  Marvin Haeuser 
  Michael Kinney 
  Michael LeMay 
  Michael Thomas 
  Ni, Ruiyu 
  Paolo Bonzini 
  Paulo Alcantara 
  Paulo Alcantara Cavalcanti 
  Qin Long 
  Qiu Shumin 
  Rodrigo Dias Correa 
  Ruiyu Ni 
  Ryan Harkin 
  Samer El-Haj-Mahmoud 
  Samer El-Haj-Mahmoud 
  Star Zeng 
  Supreeth Venkatesh 
  Tapan Shah 
  Tian, Feng 
  Vladislav Vovchenko 
  Yao Jiewen 
  Yao, Jiewen 
  Ye Ting 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 
  Zhangfei Gao 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11926 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 85573: tolerable FAIL - PUSHED

2016-03-06 Thread osstest service owner
flight 85573 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85573/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start   fail REGR. vs. 85382
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 85382

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass

version targeted for testing:
 qemuu1464ad45cd6cdeb0b5c1a54d3d3791396e47e52f
baseline version:
 qemuu3c0f12df65da872d5fbccae469f2cb21ed1c03b7

Last test of basis   85382  2016-03-04 23:20:53 Z    2 days
Testing same since   85573  2016-03-06 12:47:14 Z    0 days    1 attempts


People who touched revisions under test:
  Daniel P. Berrange 
  Eric Blake 
  Kashyap Chamarthy 
  Markus Armbruster 
  Peter Maydell 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm fail
 t

Re: [Xen-devel] [PATCH v6 3/5] IOMMU: Make the pcidevs_lock a recursive one

2016-03-06 Thread Xu, Quan
On March 04, 2016 9:59pm,  wrote:
> On Fri, 2016-03-04 at 11:54 +, Xu, Quan wrote:
> > On March 04, 2016 5:29pm,  wrote:
> > > On March 04, 2016 7:59am,  wrote:
> > >
> > > > Also I'd highlight the below modification:
> > > > -if ( !spin_trylock(&pcidevs_lock) )
> > > > -return -ERESTART;
> > > > -
> > > > +pcidevs_lock();
> > > >
> > > > IMO, it is right too.
> > > Well, I'll have to see where exactly this is (pulling such out of
> > > context is pretty unhelpful), but I suspect it can't be replaced
> > > like this.
> > >
> > Jan, I am looking forward to your review.
> > btw, It is in the assign_device(), in the
> > xen/drivers/passthrough/pci.c file.
> >
> Mmm... if multiple cpus call assign_device() and the calls race, the
> behavior before and after the patch does indeed look different to me.
> 
> In fact, in the current code, the cpus that find the lock busy quit the
> function immediately and a continuation is created. On the other hand,
> with the patch, they would spin and actually get the lock, one after the
> other (if there's more of them) at some point.
> 
> Please double check my reasoning, but it looks to me that what happens
> when the hypercall is restarted (i.e., in the current code) is indeed
> different from what happens if we just let others take the lock and
> execute the function (i.e., with the patch applied).
> 
> I suggest you try to figure out whether that is actually the case. Once
> you've done that, feel free to report here and ask for help finding a
> solution, if you don't see one.
> 

Good idea.
If multiple cpus call assign_device(), let's assume that there are 3 calls
in parallel:
  (p1). xl pci-attach TestDom :81:00.0
  (p2). xl pci-attach TestDom :81:00.0
  (p3). xl pci-attach TestDom :81:00.0
 
Furthermore, p1 and p2 run on pCPU1, and p3 runs on pCPU2.

After my patch,
__IIUC__, the invocation flow might be as follows:
pCPU1                                              pCPU2
 .                                                  .
 .                                                  .
 assign_device_1()
 {                                                  .
   spin_lock_r(lock)                                .
   .                                                assign_device_3()
   .                                                  spin_lock_r(lock) <-- blocks
   assign_device_2()
   {                                                  x <-- spins
     spin_lock_r(lock) <-- can continue               x <-- spins
     spin_unlock_r(lock) <-- *doesn't* release lock   x <-- spins
   }                                                  x <-- spins
   .                                                  x <-- spins
 }                                                    x <-- spins
 .                                                    x <-- spins
 spin_unlock_r(lock) <-- releases lock ---------->    assign_device_3()
 .                                                    continues, with lock held
 .                                                    .
 .                                                    spin_unlock_r(lock)
                                                      <-- lock is now free


Before my patch,
the invocation flow might return at the point of assign_device_2() /
assign_device_3().

So, yes, if multiple cpus call assign_device() and the calls race, the
behavior before and after the patch is indeed different.

I try to fix it with the following:
patch >>  

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -118,6 +118,11 @@ int pcidevs_is_locked(void)
 return spin_is_locked(&_pcidevs_lock);
 }

+int pcidevs_trylock(void)
+{
+return spin_trylock_recursive(&_pcidevs_lock);
+}
+
 void __init pt_pci_init(void)
 {
 radix_tree_init(&pci_segments);
@@ -1365,7 +1370,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
  p2m_get_hostp2m(d)->global_logdirty)) )
 return -EXDEV;

-if ( !spin_trylock(&pcidevs_lock) )
+if ( !pcidevs_trylock() )
 return -ERESTART;

 rc = iommu_construct(d);
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 017aa0b..b87571d 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -97,6 +97,7 @@ struct pci_dev {
 void pcidevs_lock(void);
 void pcidevs_unlock(void);
 int pcidevs_is_locked(void);
+int pcidevs_trylock(void);

 bool_t pci_known_segment(u16 seg);
 bool_t pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);

patch <<  

A quick question: should it be '-ERESTART' instead of '-EBUSY'?

There is also a similar case, cpu_hotplug:
   $cpu_up() --> cpu_hotplug_begin() --> get_cpu_maps() --> spin_trylock_recursive(&cpu_add_remove_lock)

Feel free to share your idea, and correct me if I'm wrong.

Quan
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Qemu-devel] RFC: configuring QEMU virtfs for Xen PV(H) guests

2016-03-06 Thread Juergen Gross
Hi Wei,

On 15/02/16 14:44, Wei Liu wrote:
> On Mon, Feb 15, 2016 at 02:33:05PM +0100, Juergen Gross wrote:
>> On 15/02/16 14:16, Wei Liu wrote:
>>> On Mon, Feb 15, 2016 at 09:07:13AM +, Paul Durrant wrote:
>
>>> [...]
> # Option 2: Invent a xen-9p device
>
> Another way of doing it is to expose a dummy xen-9p device, so that we
> can use -fsdev XXX -device xen-9p,YYY.  This simple device should be
> used to capture the parameters like mount_tag and fsdev_id, and then
> chained itself to a known location.  Later Xen transport can traverse
> this known location. This xen-9p device doesn't seem to fit well into
> the hierarchy. The best I can think of its parent should be
> TYPE_DEVICE.  In this case:
>
> 1. Toolstack arranges some xenstore entries.
> 2. Toolstack arranges command line options for QEMU:
>   -fsdev XXX -device xen-9p,XXX
> 3. QEMU starts up in xen-attach mode, scans xenstore for relevant
>entries, then traverses the known location.
>
> Downside: Inventing a dummy device looks suboptimal to me.
>>
>> Sorry, didn't notice this thread before.
>>
> 
> No need to be sorry. I posted this last Friday night. I wouldn't expect
> many replies on Monady.
> 
>> For Xen pvUSB backend in qemu I need a Xen system device acting as
>> parent for being able to attach/detach virtual USB busses.
>>
>> I haven't had time to update my patches for some time, but the patch
>> for this system device is rather easy. It could be used as a parent
>> of the xen-9p devices, too.
>>
>> I've attached the patch for reference.
>>
> 
> Thanks. I will have a look at your patch.

Did you have some time to look at the patch? I'm asking because I
finally found some time to start working on V2 of my qemu based pvUSB
backend. Stefano asked me to hide the system device in my backend and
I want to avoid that in case you are needing it, too.

Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3 21/23] xsplice: Add support for shadow variables

2016-03-06 Thread Martin Pohlack
On 12.02.2016 19:05, Konrad Rzeszutek Wilk wrote:
> From: Ross Lagerwall 
> 
> Shadow variables are a piece of infrastructure to be used by xsplice
> modules. They are used to attach a new piece of data to an existing
> structure in memory.
> 
> Signed-off-by: Ross Lagerwall 
> ---
>  xen/common/Makefile |   1 +
>  xen/common/xsplice_shadow.c | 105 
> 
>  xen/include/xen/xsplice_patch.h |  39 +++
>  3 files changed, 145 insertions(+)
>  create mode 100644 xen/common/xsplice_shadow.c
>  create mode 100644 xen/include/xen/xsplice_patch.h
> 
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index a8ceaff..f4d54ad 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -75,3 +75,4 @@ subdir-$(CONFIG_HAS_DEVICE_TREE) += libfdt
>  
>  obj-$(CONFIG_XSPLICE) += xsplice.o
>  obj-$(CONFIG_XSPLICE) += xsplice_elf.o
> +obj-$(CONFIG_XSPLICE) += xsplice_shadow.o
> diff --git a/xen/common/xsplice_shadow.c b/xen/common/xsplice_shadow.c
> new file mode 100644
> index 000..619cdee
> --- /dev/null
> +++ b/xen/common/xsplice_shadow.c
> @@ -0,0 +1,105 @@
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#define SHADOW_SLOTS 256

Using something very round here will give you lots of hash collisions
at the price of a very fast hash computation, as compilers, linkers, and
memory allocators tend to align starting addresses.  I would suggest
using a small prime here, e.g., 257 or 251, to get a first approximation
of a simple hash function.

Or use existing hash infrastructure (see below).

> +struct hlist_head shadow_tbl[SHADOW_SLOTS];
> +static DEFINE_SPINLOCK(shadow_lock);
> +
> +struct shadow_var {
> +struct hlist_node list; /* Linked to 'shadow_tbl' */
> +void *data;
> +const void *obj;
> +char var[16];
> +};
> +
> +void *xsplice_shadow_alloc(const void *obj, const char *var, size_t size)
> +{
> +struct shadow_var *shadow;
> +unsigned int slot;
> +
> +shadow = xmalloc(struct shadow_var);
> +if ( !shadow )
> +return NULL;
> +
> +shadow->obj = obj;
> +strlcpy(shadow->var, var, sizeof shadow->var);
> +shadow->data = xmalloc_bytes(size);
> +if ( !shadow->data )
> +{
> +xfree(shadow);
> +return NULL;
> +}
> +
> +slot = (unsigned long)obj % SHADOW_SLOTS;

hash.h has an earlier import from Linux and provides hash_long().  That
looks like it would not suffer from direct hash collisions.

(also for all other occurrences of "obj % SHADOW_SLOTS" below)
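
A minimal sketch of that suggestion, relying on the hash_long() import
mentioned above (the SHADOW_BITS name is made up here):

#include <xen/hash.h>

#define SHADOW_BITS  8                    /* 2^8 = 256 slots, as before */
#define SHADOW_SLOTS (1u << SHADOW_BITS)

static inline unsigned int shadow_slot(const void *obj)
{
    /* hash_long() mixes all the bits of the pointer, so the aligned
       addresses produced by allocators no longer pile into few buckets */
    return hash_long((unsigned long)obj, SHADOW_BITS);
}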

> +spin_lock(&shadow_lock);
> +hlist_add_head(&shadow->list, &shadow_tbl[slot]);
> +spin_unlock(&shadow_lock);
> +
> +return shadow->data;
> +}
> +
> +void xsplice_shadow_free(const void *obj, const char *var)
> +{
> +struct shadow_var *entry, *shadow = NULL;
> +unsigned int slot;
> +struct hlist_node *next;
> +
> +slot = (unsigned long)obj % SHADOW_SLOTS;
> +
> +spin_lock(&shadow_lock);
> +hlist_for_each_entry(entry, next, &shadow_tbl[slot], list)
> +{
> +if ( entry->obj == obj &&
> + !strcmp(entry->var, var) )
> +{
> +shadow = entry;
> +break;
> +}
> +}
> +if (shadow)
> +{
> +hlist_del(&shadow->list);
> +xfree(shadow->data);
> +xfree(shadow);
> +}
> +spin_unlock(&shadow_lock);
> +}
> +
> +void *xsplice_shadow_get(const void *obj, const char *var)
> +{
> +struct shadow_var *entry;
> +unsigned int slot;
> +struct hlist_node *next;
> +void *ret = NULL;
> +
> +slot = (unsigned long)obj % SHADOW_SLOTS;
> +
> +spin_lock(&shadow_lock);
> +hlist_for_each_entry(entry, next, &shadow_tbl[slot], list)
> +{
> +if ( entry->obj == obj &&
> + !strcmp(entry->var, var) )
> +{
> +ret = entry->data;
> +break;
> +}
> +}
> +
> +spin_unlock(&shadow_lock);
> +return ret;
> +}
> +
> +static int __init xsplice_shadow_init(void)
> +{
> +int i;
> +
> +for ( i = 0; i < SHADOW_SLOTS; i++ )
> +INIT_HLIST_HEAD(&shadow_tbl[i]);
> +
> +return 0;
> +}
> +__initcall(xsplice_shadow_init);
> diff --git a/xen/include/xen/xsplice_patch.h b/xen/include/xen/xsplice_patch.h
> new file mode 100644
> index 000..e3f344b
> --- /dev/null
> +++ b/xen/include/xen/xsplice_patch.h
> @@ -0,0 +1,39 @@
> +#ifndef __XEN_XSPLICE_PATCH_H__
> +#define __XEN_XSPLICE_PATCH_H__
> +
> +/*
> + * The following definitions are to be used in patches. They are taken
> + * from kpatch.
> + */
> +
> +/*
> + * xsplice shadow variables
> + *
> + * These functions can be used to add new "shadow" fields to existing data
> + * structures.  For example, to allocate a "newpid" variable associated with an
> + * instance of task_struct, and assign it a value of 1000:
> + *
> + * struct task_struct *tsk = current;
> + * int *newpid;
> + * newpid = xsplice_shadow_alloc(tsk, "