[Xen-devel] [xen-unstable test] 33326: regressions - trouble: broken/fail/pass

2015-01-11 Thread xen . org
flight 33326 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33326/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl   3 host-install(3) broken REGR. vs. 33112
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot  fail REGR. vs. 33112

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-pair17 guest-migrate/src_host/dst_host fail like 33112

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass
 test-armhf-armhf-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-amd   9 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel  9 guest-start  fail  never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start fail never pass
 test-amd64-i386-libvirt   9 guest-start  fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 14 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3 14 guest-stop fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop  fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop   fail never pass
 test-amd64-i386-xl-winxpsp3  14 guest-stop   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop   fail never pass

version targeted for testing:
 xen  877eda3223161b995feacce8d2356ced1f627fa8
baseline version:
 xen  36174af3fbeb1b662c0eadbfa193e77f68cc955b


People who touched revisions under test:
  Andrew Cooper 
  Boris Ostrovsky 
  Chao Peng 
  Ed Swierk 
  Ian Campbell 
  Ian Campbell 
  Ian Jackson 
  Ian Jackson 
  Jan Beulich 
  Juergen Gross 
  Julien Grall 
  Karim Allah Ahmed 
  Keir Fraser 
  Kevin Tian 
  Konrad Rzeszutek Wilk 
  Liang Li 
  Mihai Donțu 
  Olaf Hering 
  Paul Durrant 
  Robert Hu 
  Răzvan Cojocaru 
  Stefano Stabellini 
  Thomas Leonard 
  Tim Deegan 
  Vijaya Kumar K 
  Wei Liu 
  Wei Ye 
  Yang Hongyang 
  Yang Zhang 
  Yu Zhang 


jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  pass
 build-i386-oldkern   pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  broken
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-rhel6hvm-amd pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-rumpuserxen-amd64   pass
 test-amd64-amd64-xl-qemut-win7-amd64 fail

[Xen-devel] [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-winxpsp3

2015-01-11 Thread xen . org
branch xen-unstable
xen branch xen-unstable
job test-amd64-amd64-xl-qemuu-winxpsp3
test windows-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  49d2e648e8087d154d8bf8b91f27c8e05e79d5a6
  Bug not present: 60fb1a87b47b14e4ea67043aa56f353e77fbd70a


  commit 49d2e648e8087d154d8bf8b91f27c8e05e79d5a6
  Author: Marcel Apfelbaum 
  Date:   Tue Dec 16 16:58:05 2014 +
  
  machine: remove qemu_machine_opts global list
  
  QEMU has support for options per machine, keeping
  a global list of options is no longer necessary.
  
  Signed-off-by: Marcel Apfelbaum 
  Reviewed-by: Alexander Graf 
  Reviewed-by: Greg Bellows 
  Message-id: 1418217570-15517-2-git-send-email-marce...@redhat.com
  Signed-off-by: Peter Maydell 


For bisection revision-tuple graph see:
   
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.qemu-mainline.test-amd64-amd64-xl-qemuu-winxpsp3.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Searching for failure / basis pass:
 33123 fail [host=gall-mite] / 32598 ok.
Failure / basis pass flights: 33123 / 32598
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
Basis pass 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
7e58e2ac7778cca3234c33387e49577bb7732714 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/linux-pvops.git#83a926f7a4e39fb6be0576024e67fe161593defa-83a926f7a4e39fb6be0576024e67fe161593defa
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/staging/qemu-xen-unstable.git#b0d42741f8e9a00854c3b3faca1da84bfc69bf22-b0d42741f8e9a00854c3b3faca1da84bfc69bf22
 
git://git.qemu.org/qemu.git#7e58e2ac7778cca3234c33387e49577bb7732714-ab0302ee764fd702465aef6d88612cdff4302809
 
git://xenbits.xen.org/xen.git#36174af3fbeb1b662c0eadbfa193e77f68cc955b-36174af3fbeb1b662c0eadbfa193e77f68cc955b
+ exec
+ sh -xe
+ cd /export/home/osstest/repos/qemu
+ git remote set-url origin 
git://drall.uk.xensource.com:9419/git://git.qemu.org/qemu.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
+ exec
+ sh -xe
+ cd /export/home/osstest/repos/qemu
+ git remote set-url origin 
git://drall.uk.xensource.com:9419/git://git.qemu.org/qemu.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
Loaded 1005 nodes in revision graph
Searching for test results:
 32585 pass irrelevant
 32598 pass 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
7e58e2ac7778cca3234c33387e49577bb7732714 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32611 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32626 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32689 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32659 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32876 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32854 fail 83a926f7a4e39fb6be0576024e67fe161593defa 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b0d42741f8e9a00854c3b3faca1da84bfc69bf22 
ab0302ee764fd702465aef6d88612cdff4302809 
36174af3fbeb1b662c0eadbfa193e77f68cc955b
 32908 fail 83a926f7a4e39fb6be0576024e67fe161593d

[Xen-devel] [qemu-mainline test] 33328: trouble: broken/fail/pass

2015-01-11 Thread xen . org
flight 33328 qemu-mainline real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33328/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3   3 host-install(3) broken REGR. vs. 32598
 test-amd64-amd64-xl-qemuu-ovmf-amd64  3 host-install(3) broken REGR. vs. 32598
 test-amd64-i386-rhel6hvm-amd  3 host-install(3) broken REGR. vs. 32598

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)  broken REGR. vs. 32598
 test-amd64-i386-pair17 guest-migrate/src_host/dst_host fail like 32598

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel  9 guest-start  fail  never pass
 test-amd64-i386-libvirt   9 guest-start  fail   never pass
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass
 test-armhf-armhf-xl  10 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-amd   9 guest-start  fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop  fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3 14 guest-stop fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3 14 guest-stop fail never pass

version targeted for testing:
 qemuu f1c5831ca3e3eafb89331233221768b64db113e8
baseline version:
 qemuu 7e58e2ac7778cca3234c33387e49577bb7732714


People who touched revisions under test:
  Alex Williamson 
  Amit Shah 
  David Gibson 
  Eric Auger 
  Fabian Aggeler 
  Frank Blaschka 
  Greg Bellows 
  Jiri Pirko 
  Kim Phillips 
  Laszlo Ersek 
  Marcel Apfelbaum 
  Marcel Apfelbaum 
  Michael Walle 
  Paolo Bonzini 
  Pavel Dovgalyuk 
  Peter Maydell 
  Peter Wu 
  Scott Feldman 


jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-pvh-amd  fail
 test-amd64-i386-rhel6hvm-amd broken  
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-win7-amd64   fail
 test-amd64-i386-xl-win7-amd64   

[Xen-devel] [rumpuserxen test] 33349: all pass - PUSHED

2015-01-11 Thread xen . org
flight 33349 rumpuserxen real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33349/

Perfect :-)
All tests in this flight passed
version targeted for testing:
 rumpuserxen  34ecfdbad080f093fa0107042d5ca7f10986f324
baseline version:
 rumpuserxen  d01abc70b9ea24b6230aa54137e2ff256604ae28


People who touched revisions under test:
  Antti Kantee 
  Justin Cormack 
  Martin Lucina 


jobs:
 build-amd64  pass
 build-i386   pass
 build-amd64-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-rumpuserxen-amd64   pass
 test-amd64-i386-rumpuserxen-i386 pass



sg-report-flight on osstest.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=rumpuserxen
+ revision=34ecfdbad080f093fa0107042d5ca7f10986f324
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push rumpuserxen 
34ecfdbad080f093fa0107042d5ca7f10986f324
+ branch=rumpuserxen
+ revision=34ecfdbad080f093fa0107042d5ca7f10986f324
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock 
']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=rumpuserxen
+ xenbranch=xen-unstable
+ '[' xrumpuserxen = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ : tested/2.6.39.x
+ . ap-common
++ : osst...@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xensource.com:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : https://github.com/rumpkernel/rumprun-xen
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xensource.com:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://drall.uk.xensource.com:9419/
+++ '[' xgit://drall.uk.xensource.com:9419/ '!=' x ']'
+++ echo 
'git://drall.uk.xensource.com:9419/https://github.com/rumpkernel/rumpkernel-netbsd-src%20[fetch=try]'
++ : 
'git://drall.uk.xensource.com:9419/https://github.com/rumpkernel/rumpkernel-netbsd-src%20[fetch=try]'
++ : git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xensource.com:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osst...@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.14
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.rumpuserxen
++

[Xen-devel] [linux-linus test] 33337: regressions - trouble: blocked/broken/fail/pass

2015-01-11 Thread xen . org
flight 33337 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33337/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf   3 host-install(3) broken REGR. vs. 32879
 test-amd64-i386-libvirt   5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-rumpuserxen-i386  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-rhel6hvm-amd  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-freebsd10-i386  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-qemut-rhel6hvm-intel  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl-multivcpu  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-freebsd10-amd64  5 xen-boot   fail REGR. vs. 32879
 test-amd64-i386-xl-credit25 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-rhel6hvm-intel  5 xen-boot fail REGR. vs. 32879
 build-armhf-pvops 3 host-install(3) broken REGR. vs. 32879
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl-qemuu-winxpsp3  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 xen-boot   fail REGR. vs. 32879
 test-amd64-i386-xl-qemut-debianhvm-amd64  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl-qemuu-debianhvm-amd64  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl-qemut-win7-amd64  5 xen-boot   fail REGR. vs. 32879
 test-amd64-i386-xl-win7-amd64  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-xl-qemuu-win7-amd64  5 xen-boot   fail REGR. vs. 32879
 test-amd64-i386-xl-winxpsp3-vcpus1  5 xen-boot fail REGR. vs. 32879
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-pair  8 xen-boot/dst_host fail REGR. vs. 32879
 test-amd64-i386-pair  7 xen-boot/src_host fail REGR. vs. 32879
 test-amd64-i386-xl-winxpsp3   5 xen-boot  fail REGR. vs. 32879
 test-amd64-i386-xl-qemut-winxpsp3  5 xen-boot fail REGR. vs. 32879

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install  fail like 32879

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel  9 guest-start  fail  never pass
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start fail never pass
 test-amd64-amd64-xl-pvh-amd   9 guest-start  fail   never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop fail never pass

version targeted for testing:
 linux eb74926920cfa756087a82e0b081df837177cb95
baseline version:
 linux 9bb29b6b927bcd79cf185ee67bcebfe630f0dea1


People who touched revisions under test:
  "John W. Linville" 
  Aaron Brown 
  Aaron Plattner 
  Alan Stern 
  Alex Deucher 
  Alexandre Courbot 
  Alexey Khoroshilov 
  Andrew Jackson 
  Andrew Morton 
  Andy Shevchenko 
  Anil Chintalapati (achintal) 
  Anil Chintalapati 
  Anton Vorontsov 
  Antonio Quartulli 
  Ard Biesheuvel 
  Arne Goedeke 
  Aron Szabo 
  Ben Goz 
  Ben Pfaff 
  Ben Skeggs 
  Benjamin Tissoires 
  Bruno Prémont 
  Catalin Marinas 
  Chris Mason 
  Christian König 
  Christoph Hellwig 
  Corey Minyard 
  Dan Carpenter 
  Daniel Borkmann 
  Daniel Mack 
  Daniel Nicoletti 
  Daniel Thompson 
  Daniel Walter 
  Dave Airlie 
  Dave Airlie 
  David Drysdale 
  David Howells 
  David Rientjes 
  David S. Miller 
  Doug Anderson 
  Fabian Frederick 
  Fang, Yang A 
  Felipe Balbi 
  Filipe Manana 
  Francesco Virlinzi 
  Giedrius Statkevičius 
  Govindarajulu Varadarajan <_gov...@gmx.com>
  Hanjun Guo 
  Hanjun Guo 
  Hannes Reinecke 
  Hans de Goede 
  Hari Bathini 
  Hayes Wang 
  hayeswang 
  Henrik Rydberg 
  Herbert Xu 
  Hiral Shah 
  Holger Hoffstätte 
  Ian Cam

[Xen-devel] [xen-4.3-testing test] 33339: FAIL

2015-01-11 Thread xen . org
flight 33339 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33339/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf  3 host-install(3) broken in 33246 REGR. vs. 32282
 build-armhf-pvops3 host-install(3) broken in 33246 REGR. vs. 32282

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install fail pass in 33246

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install  fail like 32282

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 debian-hvm-install  fail never pass
 test-amd64-i386-libvirt   9 guest-start  fail   never pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64  7 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start fail never pass
 build-amd64-rumpuserxen   6 xen-build fail   never pass
 test-armhf-armhf-libvirt  5 xen-boot fail   never pass
 test-armhf-armhf-xl   5 xen-boot fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop  fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop   fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check fail  never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop  fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop fail never pass
 build-i386-rumpuserxen 6 xen-build fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail never pass
 build-armhf-libvirt   1 build-check(1)blocked in 33246 n/a
 test-armhf-armhf-libvirt  1 build-check(1)blocked in 33246 n/a
 test-armhf-armhf-xl   1 build-check(1)blocked in 33246 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stopfail in 33246 never pass

version targeted for testing:
 xen  5d4e3ff19c33770ce01bec949c50326b11088fef
baseline version:
 xen  5cd7ed02530eb86ffee6f5b9c7f04743c726754f


People who touched revisions under test:
  Mihai Donțu 
  Răzvan Cojocaru 


jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  fail
 test-amd64-i386-xl   pass
 test-amd64-i386-rhel6hvm-amd pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-rumpuserxen-amd64   blocked
 test-amd64-amd64-xl-qemut-win7-amd64 fail

Re: [Xen-devel] [Intel-gfx] [Announcement] 2014-Q4 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-01-11 Thread Jike Song

Whoops. Changed the title from "2015-Q1" to "2014-Q4" :)

--
Thanks,
Jike


On 01/09/2015 04:51 PM, Jike Song wrote:

Hi all,

We're pleased to announce a public update to Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors. The XenGT project should be considered a work in 
progress. As such, it is not a complete product, nor should it be considered 
one. Extra care should be taken when testing and configuring a system to use 
the XenGT project.

The news of this update:

- kernel update from 3.14.1 to drm-intel 3.17.0.
- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet.
- Next update will be around early Apr, 2015.

This update consists of:

- Various bug fixes and stability enhancements.
- Makes the XenGT device model aware of Broadwell. In this version 
BDW is not yet functional.
- The number of available fence registers is changed from 16 to 32 to 
align with HSW hardware.
- A new cascaded interrupt framework supporting interrupt 
virtualization on both Haswell and Broadwell.
- Adds back the gem_vgtbuffer module. The previous release did not build 
it for the 3.14 kernel; in this release it is back and rebased to 3.17.
- Enables the IRQ-based context switch in the vgt driver, which helps 
reduce CPU utilization during context switches. It is enabled by 
default and can be turned off with the kernel flag irq_based_ctx_switch.


Please refer to the new setup guide, which provides step-by-step details about 
building/configuring/running Intel GVT-g:


https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf

The new source codes are available at the updated github repos:

Linux: https://github.com/01org/XenGT-Preview-kernel.git
Xen: https://github.com/01org/XenGT-Preview-xen.git
Qemu: https://github.com/01org/XenGT-Preview-qemu.git


More information about Intel GVT-g background, architecture, etc can be found 
at:



https://www.usenix.org/conference/atc14/technical-sessions/presentation/tian

http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7_0.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt



The previous update can be found here:


http://lists.xen.org/archives/html/xen-devel/2014-12/msg00474.html



Appreciate your comments!



--
Thanks,
Jike


On 12/04/2014 10:45 AM, Jike Song wrote:

Hi all,

We're pleased to announce a public release of Intel Graphics Virtualization 
Technology (Intel GVT-g, formerly known as XenGT). Intel GVT-g is a complete 
vGPU solution with mediated pass-through, supported today on 4th generation 
Intel Core(TM) processors with Intel Graphics processors. A virtual GPU 
instance is maintained for each VM, with part of performance critical resources 
directly assigned. The capability of running native graphics driver inside a 
VM, without hypervisor intervention in performance critical paths, achieves a 
good balance among performance, feature, and sharing capability. Though we only 
support Xen on Intel Processor Graphics so far, the core logic can be easily 
ported to other hypervisors.


The news of this update:


- kernel update from 3.11.6 to 3.14.1

- We plan to integrate Intel GVT-g as a feature in i915 driver. That 
effort is still under review, not included in this update yet

- Next update will be around early Jan, 2015


This update consists of:

- Windows HVM support with driver version 15.33.3910

- Stability fixes, e.g. GPU stabilization; GPU hangs are now rare
 
- Hardware media acceleration for decoding/encoding/transcoding, with 
support for VC1, H264, and other formats
 
- Display enhancements, e.g. the DP type is supported for the virtual PORT
 
- Display port capability virtualization: with this feature, the dom0 
manager can freely assign virtual DDI ports to a VM without needing to check 
whether the corresponding physical DDI ports are available



Please refer to the new setup guide, which provides step-by-step details about 
building/configuring/running Intel GVT-g:



https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf



The new sou

[Xen-devel] [Patch V2 3/4] xen: use correct type for physical addresses

2015-01-11 Thread Juergen Gross
When converting a pfn to a physical address, be sure to use 64-bit-wide
types, or convert the physical address to a pfn if possible.

Signed-off-by: Juergen Gross 
Tested-by: Boris Ostrovsky 
---
 arch/x86/xen/setup.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index feb6d86..410210f 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -140,7 +140,7 @@ static void __init xen_del_extra_mem(u64 start, u64 size)
 unsigned long __ref xen_chk_extra_mem(unsigned long pfn)
 {
int i;
-   unsigned long addr = PFN_PHYS(pfn);
+   phys_addr_t addr = PFN_PHYS(pfn);
 
for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
if (addr >= xen_extra_mem[i].start &&
@@ -284,7 +284,7 @@ static void __init xen_update_mem_tables(unsigned long pfn, 
unsigned long mfn)
}
 
/* Update kernel mapping, but not for highmem. */
-   if ((pfn << PAGE_SHIFT) >= __pa(high_memory))
+   if (pfn >= PFN_UP(__pa(high_memory - 1)))
return;
 
if (HYPERVISOR_update_va_mapping((unsigned long)__va(pfn << PAGE_SHIFT),
-- 
2.1.2


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [Patch V2 4/4] xen: check for zero sized area when invalidating memory

2015-01-11 Thread Juergen Gross
With the introduction of the linear mapped p2m list, setting memory
areas to "invalid" had to be delayed. When doing the invalidation,
make sure no zero-sized areas are processed.

Signed-off-by: Juergen Gross 
---
 arch/x86/xen/setup.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 410210f..865e56c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -160,6 +160,8 @@ void __init xen_inv_extra_mem(void)
int i;
 
for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+   if (!xen_extra_mem[i].size)
+   continue;
pfn_s = PFN_DOWN(xen_extra_mem[i].start);
pfn_e = PFN_UP(xen_extra_mem[i].start + xen_extra_mem[i].size);
for (pfn = pfn_s; pfn < pfn_e; pfn++)
-- 
2.1.2




[Xen-devel] [Patch V2 2/4] xen: correct race in alloc_p2m_pmd()

2015-01-11 Thread Juergen Gross
When allocating a new pmd for the linear mapped p2m list, a check is
done to avoid introducing another pmd when this has just happened on
another cpu. In that case the old pte pointer was returned, which
points to the p2m_missing or p2m_identity page. The correct value is
the pointer to the newly found page.

Signed-off-by: Juergen Gross 
---
 arch/x86/xen/p2m.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 36ae094..fdb996e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -440,10 +440,9 @@ EXPORT_SYMBOL_GPL(get_phys_to_machine);
  * a new pmd is to replace p2m_missing_pte or p2m_identity_pte by a individual
  * pmd. In case of PAE/x86-32 there are multiple pmds to allocate!
  */
-static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *ptep, pte_t *pte_pg)
+static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *pte_pg)
 {
pte_t *ptechk;
-   pte_t *pteret = ptep;
pte_t *pte_newpg[PMDS_PER_MID_PAGE];
pmd_t *pmdp;
unsigned int level;
@@ -477,8 +476,6 @@ static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *ptep, pte_t *pte_pg)
if (ptechk == pte_pg) {
set_pmd(pmdp,
__pmd(__pa(pte_newpg[i]) | _KERNPG_TABLE));
-   if (vaddr == (addr & ~(PMD_SIZE - 1)))
-   pteret = pte_offset_kernel(pmdp, addr);
pte_newpg[i] = NULL;
}
 
@@ -492,7 +489,7 @@ static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *ptep, pte_t *pte_pg)
vaddr += PMD_SIZE;
}
 
-   return pteret;
+   return lookup_address(addr, &level);
 }
 
 /*
@@ -521,7 +518,7 @@ static bool alloc_p2m(unsigned long pfn)
 
if (pte_pg == p2m_missing_pte || pte_pg == p2m_identity_pte) {
/* PMD level is missing, allocate a new one */
-   ptep = alloc_p2m_pmd(addr, ptep, pte_pg);
+   ptep = alloc_p2m_pmd(addr, pte_pg);
if (!ptep)
return false;
}
-- 
2.1.2




[Xen-devel] [Patch V2 0/4] xen: correct several bugs in new p2m list setup

2015-01-11 Thread Juergen Gross
In the setup code of the linear mapped p2m list several bugs have
been found, especially for 32 bit dom0. These patches correct the
errors and make 32 bit dom0 bootable again.

Changes since V1:
- split up patch 3 as requested by David Vrabel
- use phys_addr_t instead of u64 as requested by Jan Beulich
- compare pfns instead of physical addresses as suggested by Jan Beulich

Juergen Gross (4):
  xen: correct error for building p2m list on 32 bits
  xen: correct race in alloc_p2m_pmd()
  xen: use correct type for physical addresses
  xen: check for zero sized area when invalidating memory

 arch/x86/xen/p2m.c   | 11 ---
 arch/x86/xen/setup.c |  6 --
 2 files changed, 8 insertions(+), 9 deletions(-)

-- 
2.1.2




[Xen-devel] [Patch V2 1/4] xen: correct error for building p2m list on 32 bits

2015-01-11 Thread Juergen Gross
In xen_rebuild_p2m_list(), for large areas of invalid or identity
mapped memory the pmd entries on 32 bit systems are initialized
incorrectly. Correct this error.

Suggested-by: Boris Ostrovsky 
Signed-off-by: Juergen Gross 
---
 arch/x86/xen/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index d9660a5..36ae094 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -379,7 +379,7 @@ static void __init xen_rebuild_p2m_list(unsigned long *p2m)
p2m_missing_pte : p2m_identity_pte;
for (i = 0; i < PMDS_PER_MID_PAGE; i++) {
pmdp = populate_extra_pmd(
-   (unsigned long)(p2m + pfn + i * PTRS_PER_PTE));
+   (unsigned long)(p2m + pfn) + i * PMD_SIZE);
set_pmd(pmdp, __pmd(__pa(ptep) | _KERNPG_TABLE));
}
}
-- 
2.1.2




[Xen-devel] [libvirt test] 33354: regressions - FAIL

2015-01-11 Thread xen . org
flight 33354 libvirt real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/33354/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt   5 libvirt-build fail REGR. vs. 32648
 build-i386-libvirt5 libvirt-build fail REGR. vs. 32648

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  9 guest-start  fail   never pass

version targeted for testing:
 libvirt  97fac17c77d9bdfacafff1c5c39b2df3c1530614
baseline version:
 libvirt  2360fe5d24175835d3f5fd1c7e8e6e13addab629


People who touched revisions under test:
  Alexander Burluka 
  Cedric Bosdonnat 
  Chunyan Liu 
  Cédric Bosdonnat 
  Daniel P. Berrange 
  Eric Blake 
  Geoff Hickey 
  Jim Fehlig 
  Jiri Denemark 
  John Ferlan 
  Ján Tomko 
  Kiarie Kahurani 
  Luyao Huang 
  Michal Privoznik 
  Nehal J Wani 
  Pavel Hrdina 
  Peter Krempa 
  Stefan Berger 


jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt fail
 test-armhf-armhf-libvirt blocked
 test-amd64-i386-libvirt  blocked



sg-report-flight on osstest.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 680 lines long.)



Re: [Xen-devel] [RFC V9 2/4] domain snapshot overview

2015-01-11 Thread Chun Yan Liu


>>> On 1/8/2015 at 08:26 PM, in message <1420719995.19787.62.ca...@citrix.com>, Ian Campbell wrote: 
> On Mon, 2014-12-22 at 20:42 -0700, Chun Yan Liu wrote: 
> >  
> > >>> On 12/19/2014 at 06:25 PM, in message <1418984720.20028.15.ca...@citrix.com>, Ian Campbell wrote:  
> > > On Thu, 2014-12-18 at 22:45 -0700, Chun Yan Liu wrote:  
> > > >   
> > > > >>> On 12/18/2014 at 11:10 PM, in message <1418915443.11882.86.ca...@citrix.com>, Ian Campbell wrote:   
> > > > > On Tue, 2014-12-16 at 14:32 +0800, Chunyan Liu wrote:   
> > > > > > Changes to V8:   
> > > > > >   * add an overview document, so that one can have an overall look   
> > > > > > about the whole domain snapshot work, limits, requirements,   
> > > > > > how to do, etc.   
> > > > > >
> > > > > > =
> > > > > >
> > > > > > Domain snapshot overview   
> > > > >
> > > > > I don't see a similar section for disk snapshots, are you not   
> > > > > considering those here except as a part of a domain snapshot or is 
> > > > > this   
>  
> > > > > an oversight?   
> > > > >
> > > > > There are three main use cases (that I know of at least) for   
> > > > > snapshotting like behaviour.   
> > > > >
> > > > > One is as you've mentioned below for "backup", i.e. to preserve the 
> > > > > VM   
> > > > > at a certain point in time in order to be able to roll back to it. Is 
> > > > >   
> > > > > this the only usecase you are considering?   
> > > >   
> > > > Yes. I didn't take disk snapshot thing into the scope.  
> > > >   
> > > > >
> > > > > A second use case is to support "gold image" type deployments, i.e.   
> > > > > where you create one baseline single disk image and then clone it   
> > > > > multiple times to deploy lots of guests. I think this is usually a 
> > > > > "disk  
>   
> > > > > snapshot" type thing, but maybe it can be implemented as restoring a  
> > > > >  
> > > > > gold domain snapshot multiple times (e.g. for start of day 
> > > > > performance   
> > > > > reasons).   
> > > >   
> > > > As we initially discussed about the thing, disk snapshot thing can be  
> done  
> > > > be existing tools directly like qemu-img, vhd-util.  
> > >   
> > > I was reading this section as a more generic overview of snapshotting,  
> > > without reference to where/how things might ultimately be implemented.  
> > >   
> > > From a design point of view it would be useful to cover the various use  
> > > cases, even if the solution is that the user implements them using CLI  
> > > tools by hand (xl) or the toolstack does it for them internally  
> > > (libvirt).  
> > >   
> > > This way we can more clearly see the full picture, which allows us to  
> > > validate that we are making the right choices about what goes where.  
> >  
> > OK. I see. I think this use case is more about how to use the snapshot, rather 
> > than how to implement snapshot. Right? 
>  
> Correct, what the user is actually trying to achieve with the 
> functionality. 
>  
> > 'Gold image' or 'Gold domain', the needed work is more like cloning disks. 
>  
> Yes, or resuming multiple times. 

I see. But IMO it doesn't require changes to the snapshot design and implementation.
Even when resuming multiple times, they couldn't use the same image; they would have
to duplicate the image multiple times.

>  
> > > > > The third case, (which is similar to the first), is taking a disk   
> > > > > snapshot in order to be able to run you usual backup software on the  
> > > > >  
> > > > > snapshot (which is now unchanging, which is handy) and then deleting 
> > > > > the  
>   
> > > > > disk snapshot (this differs from the first case in which disk is 
> > > > > active   
>  
> > > > > after the snapshot, and due to the lack of the memory part).   
> > > >   
> > > > Sorry, I'm still not quite clear about what this user case wants to do. 
> > > >  
> > >   
> > > The user has an active domain which they want to backup, but backup  
> > > software often does not cope well if the data is changing under its  
> > > feet.  
> > >   
> > > So the users wants to take a snapshot of the domains disks while leaving  
> > > the domain running, so they can backup that static version of the disk  
> > > out of band from the VM itself (e.g. by attaching it to a separate  
> > > backup VM).  
> >  
> > Got it. So that's simply a disk-only snapshot while the domain is active. As you 
> > mentioned below, that needs a guest agent to quiesce the disks. But currently 
> > xen hypervisor can't support that, right? 
>  
> I don't think that's relevant right now, let me explain: 
>  
> I think it's important to consider all the use cases for snapshotting, 
> not because I think they need to be implemented now but to make sure 
> that we don't make any design decisions now which would make it 
> *impossible* to implement it in the future (at least without API 
> changes). 
>  
> As a random example, w

Re: [Xen-devel] [PATCH] xen/blkfront: restart request queue when there is enough persistent_gnts_c

2015-01-11 Thread Bob Liu

On 01/09/2015 11:51 PM, Roger Pau Monné wrote:
> El 06/01/15 a les 14.19, Bob Liu ha escrit:
>> When there is no enough free grants, gnttab_alloc_grant_references()
>> will fail and block request queue will stop.
>> If the system is always lack of grants, blkif_restart_queue_callback() can't be
>> scheduled and block request queue can't be restart(block I/O hang).
>>
>> But when there are former requests complete, some grants may free to
>> persistent_gnts_c, we can give the request queue another chance to restart and
>> avoid block hang.
>>
>> Reported-by: Junxiao Bi 
>> Signed-off-by: Bob Liu 
>> ---
>>  drivers/block/xen-blkfront.c |   11 +++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index 2236c6f..dd30f99 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -1125,6 +1125,17 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
>>  }
>>  }
>>  }
>> +
>> +/*
>> + * Request queue would be stopped if failed to alloc enough grants and
>> + * won't be restarted until gnttab_free_count >= info->callback->count.
>> + *
>> + * But there is another case, once we have enough persistent grants we
>> + * can try to restart the request queue instead of continue to wait for
>> + * 'gnttab_free_count'.
>> + */
>> +if (info->persistent_gnts_c >= info->callback.count)
>> +schedule_work(&info->work);
> 
> I guess I'm missing something here, but blkif_completion is called by
> blkif_interrupt, which in turn calls kick_pending_request_queues when
> finished, which IMHO should be enough to restart the processing of requests.
> 

You are right, sorry for the mistake.

The problem we hit was a xen-block I/O hang.
Dumped data showed that at the time info->persistent_gnts_c = 8 and
max_grefs = 8, but the block request queue was still stopped.
It's very hard to reproduce this issue; we have only seen it once.

I think there might be a race condition:

    request A                           request B
    ---------                           ---------
                                        info->persistent_gnts_c < max_grefs,
                                        and fails to alloc enough grants

    interrupt happens, blkif_completion():
        info->persistent_gnts_c++
        kick_pending_request_queues()

                                        stops the block request queue,
                                        adds itself to the callback list

If the system doesn't have enough free grants (but does have enough
persistent_gnts), the request queue would still hang.

-- 
Regards,
-Bob


