Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.
On 7/21/2009 9:11 AM, Yolkfull Chow wrote:

Previously, I used 'create partition primary' to verify whether the disk could be formatted, but always got an error:
---
diskpart has encountered an error...
---
And then I found that the SCSI disk added to the Windows guest was read-only, so I changed the format command to 'detail disk' temporarily.

Interesting - how did that happen? Let's see your command line, and probably 'info block' from the monitor. Are you saying that hot-plugged drives are added as r/o?

I'm afraid so. After pci_add-ing the SCSI block device (monitor command: pci_add pci_addr=auto storage file=/tmp/stg.qcow2,if=scsi), 'info block' shows the scsi0-hd0 device with 'ro=0', whereas when I run 'create partition primary' on the selected disk in the diskpart tool, an error message is raised saying that the disk is write protected.

Well, that doesn't sound like the desired behavior. Work with the KVM developers on this.

Hi Yaniv, following is the output from the Windows guest:

---
Microsoft DiskPart version 6.0.6001
Copyright (C) 1999-2007 Microsoft Corporation.
On computer: WIN-Q18A9GP5ECI

Disk 1 is now the selected disk.

DiskPart has encountered an error: The media is write protected.
See the System Event Log for more information.
---

Have you ever seen this error while formatting a newly added SCSI block device?

The contents of my diskpart script file:
---
select disk 1
online
create partition primary
exit
---

I didn't use a script - nor have I ever hot-plugged a disk, but it does seem to happen to me as well now - the 2nd disk (the first is IDE) does indeed seem to be R/O. I'll look into it.

Also, you can always add an already formatted drive. Just create a qcow drive in another instance, format it properly and use it.

Any result with an already formatted drive?
Y.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] kvm: Drop obsolete cpu_get/put in make_all_cpus_request
Marcelo Tosatti wrote:
> Jan,
>
> This was suggested but we thought it might be safer to keep the
> get_cpu/put_cpu pair in case -rt kernels require it (which might be
> bullshit, but nobody verified).

-rt stumbles over both patterns (which is why I stumbled over this in the first place: get_cpu disables preemption, but spin_lock is a sleeping lock under -rt), and it actually requires requests_lock to become a raw_spinlock_t. Reordering get_cpu and spin_lock would be another option, but not really a gain for either scenario. So unless there is a way to make the whole critical section preemptible (thus migration-agnostic), I think we can micro-optimize it like this.

Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.
On Tue, Jul 21, 2009 at 10:45:31AM +0300, Yaniv Kaul wrote:
> On 7/21/2009 9:11 AM, Yolkfull Chow wrote:
>
>> Previously, I used 'create partition primary' to verify whether the disk
>> could be formatted, but always got an error:
>> ---
>> diskpart has encountered an error...
>> ---
>> And then I found the SCSI disk added to the Windows guest was read-only.
>> So I changed the format command to 'detail disk' temporarily.
>
> Interesting - how did that happen? Let's see your command line, and
> probably 'info block' from the monitor. Are you saying that hot-plugged
> drives are added as r/o?
>
> I am afraid yes. After pci_add (monitor command: pci_add pci_addr=auto
> storage file=/tmp/stg.qcow2,if=scsi) the SCSI block device, 'info block'
> will show the scsi0-hd0 device as 'ro=0', whereas when I 'create partition
> primary' on this selected disk in the diskpart tool, an error message is
> raised that the disk is write protected.
>
>>> Well, that doesn't sound like the desired behavior. Work with the KVM
>>> developers on this.
>>
>> Hi Yaniv, following is the output from the Windows guest:
>>
>> ---
>> Microsoft DiskPart version 6.0.6001
>> Copyright (C) 1999-2007 Microsoft Corporation.
>> On computer: WIN-Q18A9GP5ECI
>>
>> Disk 1 is now the selected disk.
>>
>> DiskPart has encountered an error: The media is write protected.
>> See the System Event Log for more information.
>>
>> Have you ever seen this error while formatting a newly added SCSI block
>> device?
>>
>> The contents of my diskpart script file:
>> ---
>> select disk 1
>
> online
>
>> create partition primary
>> exit
>> ---
>
> I didn't use a script - nor have I ever hot-plugged a disk, but it does
> seem to happen to me as well now - the 2nd disk (the first is IDE) does
> indeed seem to be R/O.
> I'll look into it.
>
> Also, you can always add an already formatted drive. Just create a qcow
> drive in another instance, format it properly and use it.
>
> Any result with an already formatted drive?

No, haven't got a chance to try that. :-(

> Y.
[PATCH] KVM: VMX: Fix locking imbalance on emulation failure
We have to disable preemption and IRQs on every exit from handle_invalid_guest_state, otherwise we generate at least a preempt_disable imbalance.

Signed-off-by: Jan Kiszka
---
 arch/x86/kvm/vmx.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3a75db3..7a8d464 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3335,7 +3335,7 @@ static void handle_invalid_guest_state(struct kvm_vcpu *vcpu,

 		if (err != EMULATE_DONE) {
 			kvm_report_emulation_failure(vcpu, "emulation failure");
-			return;
+			break;
 		}

 		if (signal_pending(current))
Re: [PATCH] Add a kvm subtest -- pci_hotplug, which supports both Windows OS and Linux OS.
On Tue, Jul 21, 2009 at 10:45:31AM +0300, Yaniv Kaul wrote:
> On 7/21/2009 9:11 AM, Yolkfull Chow wrote:
>
> [earlier discussion of the write-protected hot-plugged SCSI disk snipped]
>
> Also, you can always add an already formatted drive. Just create a qcow
> drive in another instance, format it properly and use it.
>
> Any result with an already formatted drive?

I just tried and got the same result -- write protected. Steps:

1. qemu-img create -f raw /tmp/stg.raw 1G
2. mkfs.vfat /tmp/stg.raw
3. hot_add the block device
4. diskpart to 'create partition primary' on the newly added disk

Did I make any mistake? Or do I also need to try to hot_add a drive onto which an OS has been installed?

> Y.
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
On 07/20/2009 06:07 PM, Michael Goldish wrote:

1) Log into a guest.
2) Take a time reading from the guest and host.
3) Run load on the guest and host.
4) Take a second time reading.
5) Stop the load and rest for a while.
6) Take a third time reading.
7) If the drift immediately after load is higher than a user-specified
value (in %), fail. If the drift after the rest period is higher than a
user-specified value, fail.

Signed-off-by: Michael Goldish
---
 client/tests/kvm/kvm.py       |   1 +
 client/tests/kvm/kvm_tests.py | 161 -
 2 files changed, 160 insertions(+), 2 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index b18b643..070e463 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -55,6 +55,7 @@ class kvm(test.test):
                 "kvm_install": test_routine("kvm_install", "run_kvm_install"),
                 "linux_s3": test_routine("kvm_tests", "run_linux_s3"),
                 "stress_boot": test_routine("kvm_tests", "run_stress_boot"),
+                "timedrift":    test_routine("kvm_tests", "run_timedrift"),
         }

         # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index 5991aed..ca0b8c0 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -1,4 +1,4 @@
-import time, os, logging
+import time, os, logging, re, commands
 from autotest_lib.client.common_lib import utils, error
 import kvm_utils, kvm_subprocess, ppm_utils, scan_results

@@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env):
     """
     # boot the first vm
     vm = kvm_utils.env_get_vm(env, params.get("main_vm"))
-
     if not vm:
         raise error.TestError("VM object not found in environment")
     if not vm.is_alive():
@@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env):
     for se in sessions:
         se.close()
     logging.info("Total number booted: %d" % (num -1))
+
+
+def run_timedrift(test, params, env):
+    """
+    Time drift test (mainly for Windows guests):
+
+    1) Log into a guest.
+    2) Take a time reading from the guest and host.
+    3) Run load on the guest and host.
+    4) Take a second time reading.
+    5) Stop the load and rest for a while.
+    6) Take a third time reading.
+    7) If the drift immediately after load is higher than a user-
+    specified value (in %), fail.
+    If the drift after the rest period is higher than a user-specified value,
+    fail.
+
+    @param test: KVM test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = kvm_utils.env_get_vm(env, params.get("main_vm"))
+    if not vm:
+        raise error.TestError("VM object not found in environment")
+    if not vm.is_alive():
+        raise error.TestError("VM seems to be dead; Test requires a living VM")
+
+    logging.info("Waiting for guest to be up...")
+
+    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
+    if not session:
+        raise error.TestFail("Could not log into guest")
+
+    logging.info("Logged in")
+
+    # Collect test parameters:
+    # Command to run to get the current time
+    time_command = params.get("time_command")
+    # Filter which should match a string to be passed to time.strptime()
+    time_filter_re = params.get("time_filter_re")
+    # Time format for time.strptime()
+    time_format = params.get("time_format")
+    guest_load_command = params.get("guest_load_command")
+    guest_load_stop_command = params.get("guest_load_stop_command")
+    host_load_command = params.get("host_load_command")
+    guest_load_instances = int(params.get("guest_load_instances", "1"))
+    host_load_instances = int(params.get("host_load_instances", "0"))
+    # CPU affinity mask for taskset
+    cpu_mask = params.get("cpu_mask", "0xFF")
+    load_duration = float(params.get("load_duration", "30"))
+    rest_duration = float(params.get("rest_duration", "10"))
+    drift_threshold = float(params.get("drift_threshold", "200"))
+    drift_threshold_after_rest = float(params.get("drift_threshold_after_rest",
+                                                  "200"))
+
+    guest_load_sessions = []
+    host_load_sessions = []
+
+    # Remember the VM's previous CPU affinity
+    prev_cpu_mask = commands.getoutput("taskset -p %s" % vm.get_pid())
+    prev_cpu_mask = prev_cpu_mask.split()[-1]
+    # Set the VM's CPU affinity
+    commands.getoutput("taskset -p %s %s" % (cpu_mask, vm.get_pid()))

Need to handle guest smp case where we want to pin the guest to several
cpus. Cheers for the test!

+
+    try:
+        # Get time before load
+        host_time_0 = time.time()
+        session.sendline(time_command)
+        (match, s) = session.read_up_to_prompt()
+        s = re.findall(time_filter_re, s)[0]
+        guest_time_0 = time.mkt
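The pass/fail logic in steps 4-7 of the test boils down to comparing elapsed host time against elapsed guest time. A minimal sketch of that arithmetic (a hypothetical helper, not the patch's actual code -- the real test first parses the guest readings with time_filter_re and time.strptime()):

```python
def drift_percent(host_t0, guest_t0, host_t1, guest_t1):
    """Return the guest clock's drift as a percentage of elapsed host time."""
    host_delta = host_t1 - host_t0
    guest_delta = guest_t1 - guest_t0
    # Drift is how far the guest clock diverged from the host clock,
    # normalized by the host's (wall-clock) elapsed time.
    return 100.0 * abs(host_delta - guest_delta) / host_delta

# Example: the host saw 30 s pass, the guest only saw 24 s -> 20% drift.
print(drift_percent(0.0, 0.0, 30.0, 24.0))  # 20.0
```

The test would then compare this value against drift_threshold right after the load phase, and against drift_threshold_after_rest after the rest phase.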
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
- "Dor Laor" wrote: > On 07/20/2009 06:07 PM, Michael Goldish wrote: > > 1) Log into a guest. > > 2) Take a time reading from the guest and host. > > 3) Run load on the guest and host. > > 4) Take a second time reading. > > 5) Stop the load and rest for a while. > > 6) Take a third time reading. > > 7) If the drift immediately after load is higher than a user- > > specified value (in %), fail. > > If the drift after the rest period is higher than a user-specified > value, > > fail. > > > > Signed-off-by: Michael Goldish > > --- > > client/tests/kvm/kvm.py |1 + > > client/tests/kvm/kvm_tests.py | 161 > - > > 2 files changed, 160 insertions(+), 2 deletions(-) > > > > diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py > > index b18b643..070e463 100644 > > --- a/client/tests/kvm/kvm.py > > +++ b/client/tests/kvm/kvm.py > > @@ -55,6 +55,7 @@ class kvm(test.test): > > "kvm_install": test_routine("kvm_install", > "run_kvm_install"), > > "linux_s3": test_routine("kvm_tests", > "run_linux_s3"), > > "stress_boot": test_routine("kvm_tests", > "run_stress_boot"), > > +"timedrift":test_routine("kvm_tests", > "run_timedrift"), > > } > > > > # Make it possible to import modules from the test's > bindir > > diff --git a/client/tests/kvm/kvm_tests.py > b/client/tests/kvm/kvm_tests.py > > index 5991aed..ca0b8c0 100644 > > --- a/client/tests/kvm/kvm_tests.py > > +++ b/client/tests/kvm/kvm_tests.py > > @@ -1,4 +1,4 @@ > > -import time, os, logging > > +import time, os, logging, re, commands > > from autotest_lib.client.common_lib import utils, error > > import kvm_utils, kvm_subprocess, ppm_utils, scan_results > > > > @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env): > > """ > > # boot the first vm > > vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > - > > if not vm: > > raise error.TestError("VM object not found in > environment") > > if not vm.is_alive(): > > @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env): > > for se in sessions: > > 
se.close() > > logging.info("Total number booted: %d" % (num -1)) > > + > > + > > +def run_timedrift(test, params, env): > > +""" > > +Time drift test (mainly for Windows guests): > > + > > +1) Log into a guest. > > +2) Take a time reading from the guest and host. > > +3) Run load on the guest and host. > > +4) Take a second time reading. > > +5) Stop the load and rest for a while. > > +6) Take a third time reading. > > +7) If the drift immediately after load is higher than a user- > > +specified value (in %), fail. > > +If the drift after the rest period is higher than a > user-specified value, > > +fail. > > + > > +@param test: KVM test object. > > +@param params: Dictionary with test parameters. > > +@param env: Dictionary with the test environment. > > +""" > > +vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > +if not vm: > > +raise error.TestError("VM object not found in > environment") > > +if not vm.is_alive(): > > +raise error.TestError("VM seems to be dead; Test requires a > living VM") > > + > > +logging.info("Waiting for guest to be up...") > > + > > +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2) > > +if not session: > > +raise error.TestFail("Could not log into guest") > > + > > +logging.info("Logged in") > > + > > +# Collect test parameters: > > +# Command to run to get the current time > > +time_command = params.get("time_command") > > +# Filter which should match a string to be passed to > time.strptime() > > +time_filter_re = params.get("time_filter_re") > > +# Time format for time.strptime() > > +time_format = params.get("time_format") > > +guest_load_command = params.get("guest_load_command") > > +guest_load_stop_command = > params.get("guest_load_stop_command") > > +host_load_command = params.get("host_load_command") > > +guest_load_instances = int(params.get("guest_load_instances", > "1")) > > +host_load_instances = int(params.get("host_load_instances", > "0")) > > +# CPU affinity mask for taskset > > +cpu_mask = 
params.get("cpu_mask", "0xFF") > > +load_duration = float(params.get("load_duration", "30")) > > +rest_duration = float(params.get("rest_duration", "10")) > > +drift_threshold = float(params.get("drift_threshold", "200")) > > +drift_threshold_after_rest = > float(params.get("drift_threshold_after_rest", > > + "200")) > > + > > +guest_load_sessions = [] > > +host_load_sessions = [] > > + > > +# Remember the VM's previous CPU affinity > > +prev_cpu_mask = commands.getoutput("taskset -p %s" % > vm.get_pid()) > >
Re: KVM crashes when using certain USB device
On Tue, Jul 21, 2009 at 1:23 AM, Jim Paris wrote:
> G wrote:
>> And thanks for your help and suggestions so far, btw.
>
> Here's a patch to try. I'm not familiar with the code, but it looks
> like this buffer might be too small versus the packet lengths that
> you're seeing, and similar definitions in hw/usb-uhci.c.
>
> -jim
>
> diff -urN kvm-87-orig/usb-linux.c kvm-87/usb-linux.c
> --- kvm-87-orig/usb-linux.c   2009-06-23 09:32:38.0 -0400
> +++ kvm-87/usb-linux.c        2009-07-20 19:15:35.0 -0400
> @@ -115,7 +115,7 @@
>      uint16_t offset;
>      uint8_t state;
>      struct usb_ctrlrequest req;
> -    uint8_t buffer[1024];
> +    uint8_t buffer[2048];
>  };
>
>  typedef struct USBHostDevice {

Yes! Applying this patch makes the crash go away! Thank you!

In addition to enabling DEBUG and applying your debug printout patches, I added a debug printout right above the memcpy()s in usb-linux.c, and found that the memcpy() in do_token_in() is called multiple times (since do_token_in() is called multiple times for the 1993-byte USB packet I have in my USB sniff dumps), which I guess is what's causing a buffer overflow as the offset is pushed beyond 1024 bytes. But I'm not sure.

I've looked at the code trying to figure out a better way to solve this, now that the problem spot has been found. To me it seems that malloc()ing and, when the need arises (the large 1993-byte packets I'm seeing), realloc()ing the buffer, instead of using a statically sized buffer, would be the best solution. However, I cannot find a suitable place to do this, so in the meantime I'll use your patch, although I do hope the KVM developers will implement a more stable/reliable malloc()/realloc() solution in the future. 1993 bytes isn't far from the 2048-byte limit, and it seems to me that there are more places in the USB code where statically sized buffers are used, which could lead to more problems of this kind.

One could of course redefine all buffers to be 8192 bytes instead, but that would just be a false sense of security, and perhaps some buffers need to be of a particular size to conform to the USB specification... The differences between the USB code in kvm-72 (which works without a patch) and kvm-87 are too big for me to try to find out why it works in kvm-72.

Anyways, I'm happy. Once again, thanks.
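The overflow mechanism described above -- do_token_in() copying one large packet into a fixed 1024-byte buffer in several chunks, advancing an offset each time -- and the grow-on-demand alternative suggested here can be sketched in Python (illustrative only; the real code is C in usb-linux.c, and the chunk sizes are made up):

```python
FIXED_SIZE = 1024  # size of the original 'uint8_t buffer[1024]'

def copy_chunks_fixed(chunks):
    """Mimic repeated memcpy()s at an advancing offset into a fixed buffer."""
    offset = 0
    for chunk in chunks:
        # This is the check the C code effectively lacked: a later chunk
        # pushes the offset past the end of the fixed-size buffer.
        if offset + len(chunk) > FIXED_SIZE:
            raise OverflowError("offset %d + chunk %d exceeds %d"
                                % (offset, len(chunk), FIXED_SIZE))
        offset += len(chunk)
    return offset

def copy_chunks_growable(chunks):
    """Grow-on-demand alternative: the buffer extends as data arrives."""
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
    return len(buf)

# A 1993-byte packet delivered in two sub-1024-byte chunks overflows the
# fixed buffer on the second chunk, but fits trivially in the growable one.
packet = [b"x" * 997, b"x" * 996]       # 1993 bytes total
print(copy_chunks_growable(packet))     # 1993
try:
    copy_chunks_fixed(packet)
except OverflowError as e:
    print("overflow:", e)
```

This is only meant to show why the 2048-byte patch works for 1993-byte packets while remaining a fixed limit, as the poster notes.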
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
On 07/21/2009 12:37 PM, Michael Goldish wrote:
> - "Dor Laor" wrote:
>> On 07/20/2009 06:07 PM, Michael Goldish wrote:
>>> [timedrift patch quoted in full -- snipped]
>>
>> Need to handle guest smp case where we want to pin the guest to several
>> cpus. Cheers for the test!
>
> cpu_mask is user-specified. If the user specifies 5, the VM will be
> pinned to CPUs 1 and 3. In smp tests we can set cpu_m
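How a taskset-style hex affinity mask maps to CPU numbers -- each set bit selects one CPU -- can be sketched with a small hypothetical helper (not part of the test; just to illustrate the cpu_mask parameter being discussed):

```python
def mask_to_cpus(mask):
    """Return the CPU numbers (bit positions) selected by an affinity bitmask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(mask_to_cpus(0x1))   # [0] -- the single-CPU pinning used in the config
print(mask_to_cpus(0xFF))  # [0, 1, 2, 3, 4, 5, 6, 7] -- the test's default
```

For an SMP guest, a wider mask (more set bits) pins the VM process to several CPUs at once, which is what the reviewer's comment asks the test to handle.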
Re: [Autotest] [KVM-AUTOTEST PATCH 15/17] KVM test: add timedrift test to kvm_tests.cfg.sample
On 07/20/2009 06:07 PM, Michael Goldish wrote:

> Currently the test will only run on Windows. It should be able to run on
> Linux just as well, but if I understand correctly, testing time drift on
> Linux is less interesting.

Linux is interesting too. The problem is more visible on Windows since it uses a 1000 Hz frequency when it plays multimedia. It makes timer irq injection harder.

Does the test fail without the rtc-td-hack?

> Also make some tiny cosmetic changes (spacing), and move the stress_boot
> test before the shutdown test (shutdown should be last).
>
> Signed-off-by: Michael Goldish
> ---
>  client/tests/kvm/kvm_tests.cfg.sample | 46 ++--
>  1 files changed, 37 insertions(+), 9 deletions(-)
>
> diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
> index 1288952..2d75a66 100644
> --- a/client/tests/kvm/kvm_tests.cfg.sample
> +++ b/client/tests/kvm/kvm_tests.cfg.sample
> @@ -92,20 +92,33 @@ variants:
>          test_name = disktest
>          test_control_file = disktest.control
>
> -- linux_s3: install setup
> +- linux_s3:     install setup
>      type = linux_s3
>
> -- shutdown: install setup
> +- timedrift:    install setup
> +    type = timedrift
> +    extra_params += " -rtc-td-hack"
> +    # Pin the VM and host load to CPU #0
> +    cpu_mask = 0x1
> +    # Set the load and rest durations
> +    load_duration = 20
> +    rest_duration = 20
> +    # Fail if the drift after load is higher than 50%
> +    drift_threshold = 50
> +    # Fail if the drift after the rest period is higher than 10%
> +    drift_threshold_after_rest = 10
> +
> +- stress_boot:  install setup
> +    type = stress_boot
> +    max_vms = 5
> +    alive_test_cmd = ps aux
> +
> +- shutdown: install setup
>      type = shutdown
>      kill_vm = yes
>      kill_vm_gracefully = no
>
> -- stress_boot:
> -    type = stress_boot
> -    max_vms = 5
> -    alive_test_cmd = ps aux
> -
>  # NICs
>  variants:
>      - @rtl8139:
> @@ -121,6 +134,7 @@ variants:
>  variants:
>      # Linux section
>      - @Linux:
> +        no timedrift
>          cmd_shutdown = shutdown -h now
>          cmd_reboot = shutdown -r now
>          ssh_status_test_command = echo $?
> @@ -303,8 +317,6 @@ variants:
>              md5sum=bf4635e4a4bd3b43838e72bc8c329d55
>              md5sum_1m=18ecd37b639109f1b2af05cfb57dfeaf
>
> -
> -
>      # Windows section
>      - @Windows:
>          no autotest
> @@ -318,6 +330,21 @@ variants:
>          migration_test_command = ver && vol
>          stress_boot:
>              alive_test_cmd = systeminfo
> +        timedrift:
> +            # For this to work, the ISO should contain vlc (vlc.exe) and a video (ED_1024.avi)
> +            cdrom = windows/vlc.iso
> +            time_command = "echo TIME: %date% %time%"
> +            time_filter_re = "(?<=TIME: \w\w\w ).{19}(?=\.\d\d)"
> +            time_format = "%m/%d/%Y %H:%M:%S"
> +            guest_load_command = 'cmd /c "d:\vlc -f --loop --no-qt-privacy-ask --no-qt-system-tray d:\ED_1024.avi"'
> +            # Alternative guest load:
> +            #guest_load_command = "(dir /s && dir /s && dir /s && dir /s) > nul"
> +            guest_load_stop_command = "taskkill /F /IM vlc.exe"
> +            guest_load_instances = 2
> +            host_load_command = "bzip2 -c --best /dev/urandom > /dev/null"
> +            # Alternative host load:
> +            #host_load_command = "dd if=/dev/urandom of=/dev/null"
> +            host_load_instances = 8
>
>  variants:
>      - Win2000:
> @@ -582,5 +609,6 @@ variants:
>      only qcow2.*ide.*default.*up.*Ubuntu-8.10-server.*(autotest.sleeptest)
>      only rtl8139
> +
>      # Choose your test list
>      only fc8_quick
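The time_command / time_filter_re / time_format triple in the config works as a pipeline: the guest echoes its local time, the regex extracts a strptime()-compatible substring, and time_format parses it. A quick sanity check of those exact config values against a typical English-locale Windows `%date% %time%` string (the sample output line is made up for illustration):

```python
import re
import time

# The exact values from kvm_tests.cfg.sample above:
time_filter_re = r"(?<=TIME: \w\w\w ).{19}(?=\.\d\d)"
time_format = "%m/%d/%Y %H:%M:%S"

# What 'echo TIME: %date% %time%' might print on an English-locale guest:
s = "TIME: Tue 07/21/2009 12:34:56.78"

# The lookbehind skips 'TIME: Tue ', the 19 captured characters are the
# date and time, and the lookahead drops the trailing centiseconds.
match = re.findall(time_filter_re, s)[0]
print(match)                 # 07/21/2009 12:34:56
t = time.strptime(match, time_format)
print(t.tm_year, t.tm_hour)  # 2009 12
```

Note the lookbehind is fixed-width ("TIME: " plus a three-letter day abbreviation plus a space), which Python's re module requires.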
Re: [Autotest] [KVM-AUTOTEST PATCH 15/17] KVM test: add timedrift test to kvm_tests.cfg.sample
- "Dor Laor" wrote: > On 07/20/2009 06:07 PM, Michael Goldish wrote: > > Currently the test will only run on Windows. > > It should be able to run on Linux just as well, but if I understand > correctly, > > testing time drift on Linux is less interesting. > > Linux is interesting too. The problem is more visible on windows > since > it uses 1000hz frequency when it plays multimedia. It makes timer irq > injection harder. If I understand correctly, most Linuxes don't use RTC at all (please correct me if I'm wrong). This means there's no point in testing time drift on Linux, because even if there's any drift, it won't get corrected by -rtc-td-hack. And it's pretty hard to get a drift on RHEL-3.9 for example -- at least it was very hard for me. > Does the test fail without the rtc-td-hack? The problem with the test is that it's hard to decide on the drift thresholds for failure, because the more load you use, the larger the drift you get. -rtc-td-hack makes it harder to get a drift -- you need to add more load in order to get the same drift. However, in my experiments, when I got a drift, it was not corrected when the load stopped. If I get 5 seconds of drift during load, and then I stop the load and wait, the drift remains 5 seconds, which makes me think I may be doing something wrong. I never got to see the cool fast rotating clock either. Another weird thing I noticed was that the drift was much larger when the VM and load were NOT pinned to a single CPU. It could cause a leap from 5% to 30%. (my office desktop has 2 CPUs.) I used Vista with kvm-85 I think. I tried both video load (VLC) and dir /s. Even if I did something wrong, I hope the test itself is OK, because its behavior is completely configurable. > > > > Also make some tiny cosmetic changes (spacing), and move the > stress_boot test > > before the shutdown test (shutdown should be last). 
> > > > Signed-off-by: Michael Goldish > > --- > > client/tests/kvm/kvm_tests.cfg.sample | 46 > ++-- > > 1 files changed, 37 insertions(+), 9 deletions(-) > > > > diff --git a/client/tests/kvm/kvm_tests.cfg.sample > b/client/tests/kvm/kvm_tests.cfg.sample > > index 1288952..2d75a66 100644 > > --- a/client/tests/kvm/kvm_tests.cfg.sample > > +++ b/client/tests/kvm/kvm_tests.cfg.sample > > @@ -92,20 +92,33 @@ variants: > > test_name = disktest > > test_control_file = disktest.control > > > > -- linux_s3: install setup > > +- linux_s3: install setup > > type = linux_s3 > > > > -- shutdown: install setup > > +- timedrift:install setup > > +type = timedrift > > +extra_params += " -rtc-td-hack" > > +# Pin the VM and host load to CPU #0 > > +cpu_mask = 0x1 > > +# Set the load and rest durations > > +load_duration = 20 > > +rest_duration = 20 > > +# Fail if the drift after load is higher than 50% > > +drift_threshold = 50 > > +# Fail if the drift after the rest period is higher than > 10% > > +drift_threshold_after_rest = 10 > > + > > +- stress_boot: install setup > > +type = stress_boot > > +max_vms = 5 > > +alive_test_cmd = ps aux > > + > > +- shutdown: install setup > > type = shutdown > > kill_vm = yes > > kill_vm_gracefully = no > > > > > > -- stress_boot: > > -type = stress_boot > > -max_vms = 5 > > -alive_test_cmd = ps aux > > - > > # NICs > > variants: > > - @rtl8139: > > @@ -121,6 +134,7 @@ variants: > > variants: > > # Linux section > > - @Linux: > > +no timedrift > > cmd_shutdown = shutdown -h now > > cmd_reboot = shutdown -r now > > ssh_status_test_command = echo $? 
> > @@ -303,8 +317,6 @@ variants: > > > md5sum=bf4635e4a4bd3b43838e72bc8c329d55 > > > md5sum_1m=18ecd37b639109f1b2af05cfb57dfeaf > > > > - > > - > > # Windows section > > - @Windows: > > no autotest > > @@ -318,6 +330,21 @@ variants: > > migration_test_command = ver&& vol > > stress_boot: > > alive_test_cmd = systeminfo > > +timedrift: > > +# For this to work, the ISO should contain vlc > (vlc.exe) and a video (ED_1024.avi) > > +cdrom = windows/vlc.iso > > +time_command = "echo TIME: %date% %time%" > > +time_filter_re = "(?<=TIME: \w\w\w ).{19}(?=\.\d\d)" > > +time_format = "%m/%d/%Y %H:%M:%S" > > +guest_load_command = 'cmd /c "d:\vlc -f --loop > --no-qt-privacy-ask --no-qt-system-tray d:\ED_1024.avi"' > > +# Alternative guest load: > > +#guest_load_command = "(dir /s&& dir /s&& dir /s&& > dir /s)> nul" > > +guest_load_stop_command = "taskkill /F /IM vlc.exe" > > +guest_load_instances =
[RFC] KVM test: Refactoring the kvm control file and the config file
Currently we have our kvm test control file and configuration file as separate files. Having them split like this makes it harder for users to edit them, let's say, using the web frontend. So it might be good to merge the control file and the config file, and refactor the control file code. Do you think this would be a valid approach? Any comments are welcome. Lucas -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 6/9] remove kvm_in* functions
On Tue, Jul 21, 2009 at 09:11:06AM +0300, Gleb Natapov wrote: > On Mon, Jul 20, 2009 at 07:10:13PM -0400, Glauber Costa wrote: > > We can use plain qemu's here, and save a couple of lines/complexity. > > I'm leaving outb for later, because the SMM thing makes it a little bit > > less trivial. > > > I think you can remove all this black SMM magic from kvm_outb(). It is > handled in acpi.c now. Just booted WindowsXP with all this crap deleted. Cool, I figured it out, but as I said, it is much less trivial. If you agree with removing it, a new patch will follow after this series is in > > > Signed-off-by: Glauber Costa > > --- > > qemu-kvm.c | 25 - > > 1 files changed, 4 insertions(+), 21 deletions(-) > > > > diff --git a/qemu-kvm.c b/qemu-kvm.c > > index 26cac25..58d5de2 100644 > > --- a/qemu-kvm.c > > +++ b/qemu-kvm.c > > @@ -97,24 +97,6 @@ static int kvm_debug(void *opaque, void *data, > > } > > #endif > > > > -static int kvm_inb(void *opaque, uint16_t addr, uint8_t *data) > > -{ > > -*data = cpu_inb(0, addr); > > -return 0; > > -} > > - > > -static int kvm_inw(void *opaque, uint16_t addr, uint16_t *data) > > -{ > > -*data = cpu_inw(0, addr); > > -return 0; > > -} > > - > > -static int kvm_inl(void *opaque, uint16_t addr, uint32_t *data) > > -{ > > -*data = cpu_inl(0, addr); > > -return 0; > > -} > > - > > #define PM_IO_BASE 0xb000 > > > > static int kvm_outb(void *opaque, uint16_t addr, uint8_t data) > > @@ -855,15 +837,16 @@ static int handle_io(kvm_vcpu_context_t vcpu) > > for (i = 0; i < run->io.count; ++i) { > > switch (run->io.direction) { > > case KVM_EXIT_IO_IN: > > + r = 0; > > switch (run->io.size) { > > case 1: > > - r = kvm_inb(kvm->opaque, addr, p); > > + *(uint8_t *)p = cpu_inb(kvm->opaque, addr); > > break; > > case 2: > > - r = kvm_inw(kvm->opaque, addr, p); > > + *(uint16_t *)p = cpu_inw(kvm->opaque, addr); > > break; > > case 4: > > - r = kvm_inl(kvm->opaque, addr, p); > > + *(uint32_t *)p = cpu_inl(kvm->opaque, addr); > > break; > > default: > > 
fprintf(stderr, "bad I/O size %d\n", > > run->io.size); > > -- > > 1.6.2.2 > > -- > Gleb.
Re: [PATCH 6/9] remove kvm_in* functions
On Tue, Jul 21, 2009 at 09:23:24AM -0300, Glauber Costa wrote: > On Tue, Jul 21, 2009 at 09:11:06AM +0300, Gleb Natapov wrote: > > On Mon, Jul 20, 2009 at 07:10:13PM -0400, Glauber Costa wrote: > > > We can use plain qemu's here, and save a couple of lines/complexity. > > > I'm leaving outb for later, because the SMM thing makes it a little bit > > > less trivial. > > > > > I think you can remove all this black SMM magic from kvm_outb(). It is > > handled in acpi.c now. Just booted WindowsXP with all this crap deleted. > Cool, I figured it out, but as I said, it is much less trivial. > If you agree with removing it, a new patch will follow after this series is > in > I agree, but I am not the maintainer :) -- Gleb.
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
- "Lucas Meneghel Rodrigues" wrote: > Currently we have our kvm test control file and configuration file, > having them split like this makes it harder for users to edit it, > let's > say, using the web frontend. > > So it might be good to merge the control file and the config file, > and > make a refactor on the control file code. Do you think this would be > a valid approach? Any comments are welcome. > > Lucas What exactly do you mean by merge? Embed the entire config file in the control file as a python string? A few comments: 1. The bulk of the config file usually doesn't need to be modified from the web frontend, IMO. It actually doesn't need to be modified very often -- once everything is defined, only minor changes are required. 2. Changes to the config can be made in the control file rather easily using kvm_config methods that are implemented but not currently used. Instead of the short form: list = kvm_config.config(filename).get_list() we can use: cfg = kvm_config.config(filename) # parse any one-liner like this: cfg.parse_string("only nightly") # parse anything the parser understands like this: cfg.parse_string(""" install: steps = blah foo = bar only qcow2.*Windows """) # we can parse several times and the effect is cumulative cfg.parse_string(""" variants: - foo: only scsi - bar: only WinVista.32 variants: - 1: - 2: """) # we can also parse additional files: cfg.parse_file("windows_cdkeys.cfg") # finally, get the resulting list list = cfg.get_list() 3. We may want to consider something in between having the control and config completely separated (what we have today), and having them both in the same file. For example, we can define the test sets (nightly, weekly, fc8_quick, custom) in the config file, and select the test set (e.g. "only nightly") in the control file by convention. 
Alternatively we can omit the test sets from the config file, and just define a single test set (the one we'll be using) in the control file, or define several test sets in the control file, and select one of them. We can actually do both things at the same time, by defining the test sets in the config file, and defining a "full" test set among them (I think it's already there), which doesn't modify anything. If we want to use a standard test set from the config file, we can do "only nightly" in the control, and if we want to use a custom test set, we can do: cfg.parse_string(""" only full # define the test set below (no need for variants) only RHEL only qcow2 only autotest.dbench """) 4. It could be a good idea to make a "windows_cdkeys.cfg" file that contains mainly single-line exceptions, such as: WinXP.32: cdkey = REPLACE_ME WinXP.64: cdkey = REPLACE_ME Win2003.32: cdkey = REPLACE_ME ... The real cdkeys should be entered by the user. Then the file will be parsed after kvm_tests.cfg, using the parse_file() method (in the control). This way the user won't have to enter the cdkeys into the long config file every time it gets replaced by a newer version. The cdkeys file won't be replaced because it's specific to the test environment (we'll only supply a sample like we do with kvm_tests.cfg). Maybe we can generalize this idea and call the file local_prefs.cfg, and decide that the file should contain any environment-specific changes that the user wants to make to the config. The file will contain mainly exceptions (single or multi-line). But I'm not sure there are many environment-specific things other than cdkeys, so maybe this isn't necessary. Let me know what you think. Thanks, Michael
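As a toy illustration of the cumulative-parsing workflow described above, here is a stand-in for kvm_config.config. MiniConfig is hypothetical and only understands "key = value" and "only <token>" statements; the real parser handles variants, exceptions, includes and much more. The point is only that successive parse calls accumulate:

```python
# Minimal sketch of cumulative config parsing. MiniConfig is a
# hypothetical stand-in for kvm_config.config, NOT the real class.

class MiniConfig(object):
    def __init__(self, variants):
        # each variant is a dict with at least a "name" key
        self.variants = [dict(v) for v in variants]

    def parse_string(self, text):
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("only "):
                token = line[len("only "):].strip()
                # keep only variants whose name contains the token
                self.variants = [v for v in self.variants
                                 if token in v["name"]]
            elif "=" in line:
                key, value = [s.strip() for s in line.split("=", 1)]
                for v in self.variants:
                    v[key] = value

    def get_list(self):
        return self.variants


cfg = MiniConfig([{"name": "qcow2.WinXP"},
                  {"name": "qcow2.RHEL"},
                  {"name": "raw.RHEL"}])
cfg.parse_string("only RHEL")             # first filter
cfg.parse_string("drift_threshold = 50")  # settings apply cumulatively
cfg.parse_string("only qcow2")            # second filter narrows further
print([v["name"] for v in cfg.get_list()])  # ['qcow2.RHEL']
```

Each parse_string() call narrows or augments the accumulated result, which is the property the control-file examples above rely on.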
Re: [PATCH] rev7: support colon in filenames
Ram Pai schrieb: > Problem: It is impossible to feed filenames with the character colon because > qemu interprets such names as a protocol. For example filename scsi:0, is > interpreted as a protocol by name "scsi". > > This patch allows the user to escape colon characters. For example the above > filename can now be expressed either as 'scsi\:0' or as file:scsi:0 > > anything following the "file:" tag is interpreted verbatim. However if the "file:" > tag is omitted then any colon characters in the string must be escaped using > backslash. > > Here are a couple of examples: > > scsi\:0\:abc is a local file scsi:0:abc > http\://myweb is a local file by name http://myweb > file:scsi:0:abc is a local file scsi:0:abc > file:http://myweb is a local file by name http://myweb > > fat:c:\path\to\dir\:floppy\: is a fat file by name \path\to\dir:floppy: > NOTE: The above example cannot be expressed using the "file:" protocol. > > > Changelog w.r.t. iteration 0: >1) removes flexibility added to nbd semantics eg -- nbd:\:: >2) introduce the file: protocol to indicate local file > > Changelog w.r.t. iteration 1: >1) generically handles 'file:' protocol in find_protocol >2) centralizes 'filename' pruning before the call to open(). >3) fixes buffer overflow seen in fill_token() >4) adheres to coding style >5) patch against upstream qemu tree > > Changelog w.r.t. iteration 2: >1) really really fixes buffer overflow seen in > fill_token() (if not, beat me :) >2) the centralized 'filename' pruning had a side effect with > qcow2 files and other files. Fixed it. _open() is back. > > Changelog w.r.t. iteration 3: >1) support added to raw-win32.c (i do not have the setup to > test this change. 
Request help with testing) >2) ability to escape option-values containing commas using > backslashes > eg file=file:abc,, can also be expressed as file=file:abc\, > where 'abc,' is a filename >3) fixes a bug (reported by Jan Kiszka) w.r.t support for -snapshot >4) renamed _open() to qemu_open() and removed dependency on PATH_MAX > > Changelog w.r.t. iteration 4: >1) applies to upstream qemu tree > > Changelog w.r.t. iteration 5: >1) fixed an issue with backing_filename for qcow2 files, > reported by Jamie Lokier. >2) fixed a compile issue with win32-raw.c reported by Blue Swirl. > (I do not have the setup to test win32 changes. >Request help with testing) > > Changelog w.r.t. iteration 6: >1) fixed all the issues found with win32. > a) changed the call to strnlen() to qemu_strlen() in cutils.c > b) fixed the call to CreateFile() in qemu_CreateFile() > > Signed-off-by: Ram Pai > > > block.c | 38 - > block/raw-posix.c | 15 > block/raw-win32.c | 26 -- > block/vvfat.c | 97 +++- > cutils.c | 46 + > qemu-common.h |2 + > qemu-option.c |8 - > 7 files changed, 195 insertions(+), 37 deletions(-) > > diff --git a/block.c b/block.c > index 39f726c..da6eaf7 100644 > --- a/block.c > +++ b/block.c > @@ -225,7 +225,6 @@ static BlockDriver *find_protocol(const char *filename) > { > BlockDriver *drv1; > char protocol[128]; > -int len; > const char *p; > > #ifdef _WIN32 > @@ -233,14 +232,9 @@ static BlockDriver *find_protocol(const char *filename) > is_windows_drive_prefix(filename)) > return bdrv_find_format("raw"); > #endif > -p = strchr(filename, ':'); > -if (!p) > +p = prune_strcpy(protocol, sizeof(protocol), filename, ':'); > +if (*p != ':') > return bdrv_find_format("raw"); > -len = p - filename; > -if (len > sizeof(protocol) - 1) > -len = sizeof(protocol) - 1; > -memcpy(protocol, filename, len); > -protocol[len] = '\0'; > for(drv1 = first_drv; drv1 != NULL; drv1 = drv1->next) { > if (drv1->protocol_name && > !strcmp(drv1->protocol_name, protocol)) > @@ -331,7 +325,6 @@ 
int bdrv_open2(BlockDriverState *bs, const char > *filename, int flags, > { > int ret, open_flags; > char tmp_filename[PATH_MAX]; > -char backing_filename[PATH_MAX]; > > bs->read_only = 0; > bs->is_temporary = 0; > @@ -343,7 +336,6 @@ int bdrv_open2(BlockDriverState *bs, const char > *filename, int flags, > if (flags & BDRV_O_SNAPSHOT) { > BlockDriverState *bs1; > int64_t total_size; > -int is_protocol = 0; > BlockDriver *bdrv_qcow2; > QEMUOptionParameter *options; > > @@ -359,25 +351,15 @@ int bdrv_open2(BlockDriverState *bs, const char > *filename, int flags, > } > total_size = bdrv_getlength(bs1) >> SECTOR_BITS; > > -if (bs1->drv && bs1->drv->protocol_name) > -
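The naming rules in the patch description can be modeled in a few lines of Python. This is only an illustration of the intended semantics ("file:" taken verbatim, "\:" escaping a colon, an unescaped colon naming a protocol); the actual implementation is the C code in block.c above:

```python
# Illustrative model of the filename rules from the patch description.
# Not qemu code -- just the semantics of "file:" and backslash-escaped
# colons, for clarity.

def classify(filename):
    """Return (protocol, local_path); protocol is None for local files."""
    # "file:" prefix: everything after it is taken verbatim
    if filename.startswith("file:"):
        return None, filename[len("file:"):]
    out = []
    i = 0
    while i < len(filename):
        c = filename[i]
        if c == "\\" and i + 1 < len(filename) and filename[i + 1] == ":":
            out.append(":")   # escaped colon is part of the name
            i += 2
            continue
        if c == ":":
            # first unescaped colon: the text before it names a protocol
            return "".join(out), filename
        out.append(c)
        i += 1
    return None, "".join(out)  # no unescaped colon: plain local file

print(classify(r"scsi\:0\:abc"))   # (None, 'scsi:0:abc')
print(classify(r"http\://myweb"))  # (None, 'http://myweb')
print(classify("file:scsi:0:abc")) # (None, 'scsi:0:abc')
print(classify("http://myweb"))    # ('http', 'http://myweb')
```

Note how both spellings from the examples in the changelog ('scsi\:0\:abc' and 'file:scsi:0:abc') resolve to the same local file, while an unescaped colon still selects a protocol driver.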
Re: [PATCH] kvm: Drop obsolete cpu_get/put in make_all_cpus_request
Hi, I suggested this too the first time around when I saw the patch, but they reminded me it's needed to make life easier for preempt-rt... On Mon, Jul 20, 2009 at 11:30:12AM +0200, Jan Kiszka wrote: > spin_lock disables preemption, so we can simply read the current cpu. > > Signed-off-by: Jan Kiszka > --- > > virt/kvm/kvm_main.c |3 +-- > 1 files changed, 1 insertions(+), 2 deletions(-) > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index 7cd1c10..98e4ec8 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -741,8 +741,8 @@ static bool make_all_cpus_request(struct kvm *kvm, > unsigned int req) > if (alloc_cpumask_var(&cpus, GFP_ATOMIC)) > cpumask_clear(cpus); > > - me = get_cpu(); > spin_lock(&kvm->requests_lock); > + me = smp_processor_id(); > kvm_for_each_vcpu(i, vcpu, kvm) { > if (test_and_set_bit(req, &vcpu->requests)) > continue; > @@ -757,7 +757,6 @@ static bool make_all_cpus_request(struct kvm *kvm, > unsigned int req) > else > called = false; > spin_unlock(&kvm->requests_lock); > - put_cpu(); > free_cpumask_var(cpus); > return called; > }
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
* Michael Goldish [2009-07-21 07:38]: > > - "Lucas Meneghel Rodrigues" wrote: > > > Currently we have our kvm test control file and configuration file, > > having them split like this makes it harder for users to edit it, > > let's > > say, using the web frontend. > > > > So it might be good to merge the control file and the config file, > > and > > make a refactor on the control file code. Do you think this would be > > a valid approach? Any comments are welcome. > > > > Lucas > > What exactly do you mean by merge? Embed the entire config file in > the control file as a python string? > > A few comments: > > 1. The bulk of the config file usually doesn't need to be modified > from the web frontend, IMO. It actually doesn't need to be modified > very often -- once everything is defined, only minor changes are > required. Agreed. In fact, I have a kvm_tests.common file that has all of the guest and parameter definitions, and then I have separate "test" files that are appended to the common file to create a kvm_tests.cfg for the specific tests I want to run. > > 2. Changes to the config can be made in the control file rather easily > using kvm_config methods that are implemented but not currently used. > Instead of the short form: > > list = kvm_config.config(filename).get_list() > > we can use: > > cfg = kvm_config.config(filename) > > # parse any one-liner like this: > cfg.parse_string("only nightly") > > # parse anything the parser understands like this: > cfg.parse_string(""" > install: > steps = blah > foo = bar > only qcow2.*Windows > """) > > # we can parse several times and the effect is cumulative > cfg.parse_string(""" > variants: > - foo: > only scsi > - bar: > only WinVista.32 > variants: > - 1: > - 2: > """) > > # we can also parse additional files: > cfg.parse_file("windows_cdkeys.cfg") > > # finally, get the resulting list > list = cfg.get_list() > > 3. 
We may want to consider something in between having the control and > config completely separated (what we have today), and having them both > in the same file. For example, we can define the test sets (nightly, > weekly, fc8_quick, custom) in the config file, and select the test set > (e.g. "only nightly") in the control file by convention. Alternatively > we can omit the test sets from the config file, and just define a single > test set (the one we'll be using) in the control file, or define several > test sets in the control file, and select one of them. Yeah, this models what I'm doing today; common config file, and then a separate test selector mechanism. I'd actually prefer to not have to touch the control file at all since it already has a bunch of logic and other info in it; and just be able to specify my test selector file. I think your above examples imply we can do this with the code today: cfg = kvm_config.config(kvm_tests_common) # parse any one-liner like this: cfg.parse_string("only nightly") > We can actually do both things at the same time, by defining the test > sets in the config file, and defining a "full" test set among them (I > think it's already there), which doesn't modify anything. If we want to > use a standard test set from the config file, we can do "only nightly" > in the control, and if we want to use a custom test set, we can do: > cfg.parse_string(""" > only full > # define the test set below (no need for variants) > only RHEL > only qcow2 > only autotest.dbench > """) Yep. > > 4. It could be a good idea to make a "windows_cdkeys.cfg" file, that > contains mainly single-line exceptions, such as: > WinXP.32: cdkey = REPLACE_ME > WinXP.64: cdkey = REPLACE_ME > Win2003.32: cdkey = REPLACE_ME > ... > The real cdkeys should be entered by the user. Then the file will be > parsed after kvm_tests.cfg, using the parse_file() method (in the > control). 
This way the user won't have to enter the cdkeys into the > long config file every time it gets replaced by a newer version. The > cdkeys file won't be replaced because it's specific to the test > environment (we'll only supply a sample like we do with kvm_tests.cfg). Yep, I like that as well. > > Maybe we can generalize this idea and call the file local_prefs.cfg, > and decide that the file should contain any environment-specific > changes that the user wants to make to the config. The file will > contain mainly exceptions (single or multi-line). But I'm not sure > there are many environment specific things other than cdkeys, so maybe > this isn't necessary. > > > Let me know what you think. I think having a common kvm_tests.cfg file that is automatically loaded, along with the additional one-liner/custom test selector mechanism, would go a long way toward providing what Lucas was asking for. -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ry...@us.ibm.com
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
- "Ryan Harper" wrote: > * Michael Goldish [2009-07-21 07:38]: > > > > - "Lucas Meneghel Rodrigues" wrote: > > > > > Currently we have our kvm test control file and configuration > file, > > > having them split like this makes it harder for users to edit it, > > > let's > > > say, using the web frontend. > > > > > > So it might be good to merge the control file and the config > file, > > > and > > > make a refactor on the control file code. Do you think this would > be > > > a valid approach? Any comments are welcome. > > > > > > Lucas > > > > What exactly do you mean by merge? Embed the entire config file in > > the control file as a python string? > > > > A few comments: > > > > 1. The bulk of the config file usually doesn't need to be modified > > from the web frontend, IMO. It actually doesn't need to be modified > > very often -- once everything is defined, only minor changes are > > required. > > Agreed. In fact, I have a kvm_tests.common file that has all of the > guest and parameter definitions, and then I have separate "test" > files > that are appended to the common file to create a kvm_tests.cfg for > the > specific tests I want to run. > > > > > 2. Changes to the config can be made in the control file rather > easily > > using kvm_config methods that are implemented but not currently > used. 
> > Instead of the short form: > > > > list = kvm_config.config(filename).get_list() > > > > we can use: > > > > cfg = kvm_config.config(filename) > > > > # parse any one-liner like this: > > cfg.parse_string("only nightly") > > > > # parse anything the parser understands like this: > > cfg.parse_string(""" > > install: > > steps = blah > > foo = bar > > only qcow2.*Windows > > """) > > > > # we can parse several times and the effect is cumulative > > cfg.parse_string(""" > > variants: > > - foo: > > only scsi > > - bar: > > only WinVista.32 > > variants: > > - 1: > > - 2: > > """) > > > > # we can also parse additional files: > > cfg.parse_file("windows_cdkeys.cfg") > > > > # finally, get the resulting list > > list = cfg.get_list() > > > > 3. We may want to consider something in between having the control > and > > config completely separated (what we have today), and having them > both > > in the same file. For example, we can define the test sets > (nightly, > > weekly, fc8_quick, custom) in the config file, and select the test > set > > (e.g. "only nightly") in the control file by convention. > Alternatively > > we can omit the test sets from the config file, and just define a > single > > test set (the one we'll be using) in the control file, or define > several > > test sets in the control file, and select one of them. > > Yeah, this models what I'm doing today; common config file, and then > a > separate test selector mechanism. I'd actually prefer to not have to > touch the control file at all since it already has a bunch of logic > and > other info in it; and just be able to specify my test selector file. If you want to avoid touching the control file altogether, you can put an 'include' statement at the end of kvm_tests.cfg: include my_custom_file.cfg ('include' jumps to another file, parses it, and then returns to the parent file.) Then you can make any modifications you want in my_custom_file.cfg, and never touch the control file or kvm_tests.cfg. 
Make sure the included file exists, otherwise the parser will raise an exception. So in total there are 3 ways to modify the config outside kvm_tests.cfg: - cfg.parse_string() in the control file (parses any string the parser understands) - cfg.parse_file() in the control file (parses a file) - 'include' in kvm_tests.cfg (parses a file) > I think your above examples imply we can do this with the code today: > > cfg = kvm_config.config(kvm_tests_common) > > # parse any one-liner like this: > cfg.parse_string("only nightly") Yes, this should certainly work, but make sure to also do list = cfg.get_list() when you're done parsing. > > We can actually do both things at the same time, by defining the > test > > sets in the config file, and defining a "full" test set among them > (I > > think it's already there), which doesn't modify anything. If we want > to > > use a standard test set from the config file, we can do "only > nightly" > > in the control, and if we want to use a custom test set, we can do: > > cfg.parse_string(""" > > only full > > # define the test set below (no need for variants) > > only RHEL > > only qcow2 > > only autotest.dbench > > """) > > Yep. > > > > > 4. It could be a good idea to make a "windows_cdkeys.cfg" file, > that > > contains mainly single-line exceptions, such as: > > WinXP.32: cdkey = REPLACE_ME > > WinXP.64: cdkey = REPLACE_ME > > Win2003.32: cdkey = REPLACE_ME > > ... > > The real cdkeys should be entered by the user. Then the file will > be > > parsed after kvm_tests.cfg, using the parse_file() method (in
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
Michael Goldish wrote: > - "Lucas Meneghel Rodrigues" wrote: > >> Currently we have our kvm test control file and configuration file, >> having them split like this makes it harder for users to edit it, >> let's >> say, using the web frontend. >> >> So it might be good to merge the control file and the config file, >> and >> make a refactor on the control file code. Do you think this would be >> a valid approach? Any comments are welcome. >> >> Lucas > > What exactly do you mean by merge? Embed the entire config file in > the control file as a python string? > > A few comments: > > 1. The bulk of the config file usually doesn't need to be modified > from the web frontend, IMO. It actually doesn't need to be modified > very often -- once everything is defined, only minor changes are > required. > > 2. Changes to the config can be made in the control file rather easily > using kvm_config methods that are implemented but not currently used. > Instead of the short form: > > list = kvm_config.config(filename).get_list() > > we can use: > > cfg = kvm_config.config(filename) > > # parse any one-liner like this: > cfg.parse_string("only nightly") > > # parse anything the parser understands like this: > cfg.parse_string(""" > install: > steps = blah > foo = bar > only qcow2.*Windows > """) > > # we can parse several times and the effect is cumulative > cfg.parse_string(""" > variants: > - foo: > only scsi > - bar: > only WinVista.32 > variants: > - 1: > - 2: > """) > > # we can also parse additional files: > cfg.parse_file("windows_cdkeys.cfg") > > # finally, get the resulting list > list = cfg.get_list() > > 3. We may want to consider something in between having the control and > config completely separated (what we have today), and having them both > in the same file. For example, we can define the test sets (nightly, > weekly, fc8_quick, custom) in the config file, and select the test set > (e.g. "only nightly") in the control file by convention. 
Alternatively > we can omit the test sets from the config file, and just define a single > test set (the one we'll be using) in the control file, or define several > test sets in the control file, and select one of them. > We can actually do both things at the same time, by defining the test > sets in the config file, and defining a "full" test set among them (I > think it's already there), which doesn't modify anything. If we want to > use a standard test set from the config file, we can do "only nightly" > in the control, and if we want to use a custom test set, we can do: > cfg.parse_string(""" > only full > # define the test set below (no need for variants) > only RHEL > only qcow2 > only autotest.dbench > """) > > 4. It could be a good idea to make a "windows_cdkeys.cfg" file, that > contains mainly single-line exceptions, such as: > WinXP.32: cdkey = REPLACE_ME > WinXP.64: cdkey = REPLACE_ME > Win2003.32: cdkey = REPLACE_ME > ... > The real cdkeys should be entered by the user. Then the file will be > parsed after kvm_tests.cfg, using the parse_file() method (in the > control). This way the user won't have to enter the cdkeys into the > long config file every time it gets replaced by a newer version. The > cdkeys file won't be replaced because it's specific to the test > environment (we'll only supply a sample like we do with kvm_tests.cfg). > > Maybe we can generalize this idea and call the file local_prefs.cfg, > and decide that the file should contain any environment-specific > changes that the user wants to make to the config. The file will > contain mainly exceptions (single or multi-line). But I'm not sure > there are many environment specific things other than cdkeys, so maybe > this isn't necessary. > > > Let me know what you think. Michael all very good comments, I specifically like the windows config file idea. The way I always envisioned it was something like this.. 
The config file specifies the whole test matrix, ie all variants that you could run each test on, ie all os's, all archs, all disk types, all cpu/mem configurations. The control file would be more of a test-specific config file, setting any local or environmental vars for each test, and like Michael said can override "stuff" from the main config file... I also really like the idea of creating a generic kvm_test that all kvm tests would inherit from, ie. $AUTOTEST/client/common_lib/kvm_test.py All helper classes ie. kvm.py, kvm_utils.py, kvm_config.py, and even the config file itself could then go into $AUTOTEST/client/common_lib/test_utils/ or even maybe something like $AUTOTEST/client/common_lib/kvm_test_utils/ All kvm specific tests would inherit from the generic kvm_test, and then go into either $AUTOTEST/client/tests/ or $AUTOTEST/client/kvm_tests/ directories, each having their own sub dir like the current autotest tests. In this dir there would be a control file specific for each test, t
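A minimal sketch of the generic base-class idea proposed above might look like this (all names here are hypothetical, not existing autotest API; the real test.test interface is simplified away):

```python
# Rough sketch of a generic kvm test base class (hypothetical names).
# Concrete tests override run_once(); the base class owns the
# setup/run/cleanup skeleton shared by all kvm tests.

class KvmTest(object):
    def __init__(self, params):
        self.params = params          # per-test config dict

    def setup(self):
        pass                          # e.g. boot or reuse a VM

    def run_once(self):
        raise NotImplementedError     # the actual test body

    def cleanup(self):
        pass                          # e.g. shut the VM down

    def execute(self):
        self.setup()
        try:
            self.run_once()
        finally:
            self.cleanup()            # always runs, even on failure


class BootTest(KvmTest):
    def run_once(self):
        # a real test would log in and run commands in the guest
        self.result = "booted %s" % self.params.get("main_vm", "vm1")


t = BootTest({"main_vm": "vm1"})
t.execute()
print(t.result)   # booted vm1
```

The win here is that per-test control files shrink to parameter handling, while the shared VM lifecycle logic lives in one place.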
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
On Mon, Jul 20, 2009 at 06:07:19PM +0300, Michael Goldish wrote: > 1) Log into a guest. > 2) Take a time reading from the guest and host. > 3) Run load on the guest and host. > 4) Take a second time reading. > 5) Stop the load and rest for a while. > 6) Take a third time reading. > 7) If the drift immediately after load is higher than a user- > specified value (in %), fail. > If the drift after the rest period is higher than a user-specified value, > fail. > > Signed-off-by: Michael Goldish > --- > client/tests/kvm/kvm.py |1 + > client/tests/kvm/kvm_tests.py | 161 > - > 2 files changed, 160 insertions(+), 2 deletions(-) > > diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py > index b18b643..070e463 100644 > --- a/client/tests/kvm/kvm.py > +++ b/client/tests/kvm/kvm.py > @@ -55,6 +55,7 @@ class kvm(test.test): > "kvm_install": test_routine("kvm_install", > "run_kvm_install"), > "linux_s3": test_routine("kvm_tests", "run_linux_s3"), > "stress_boot": test_routine("kvm_tests", "run_stress_boot"), > +"timedrift":test_routine("kvm_tests", "run_timedrift"), > } > > # Make it possible to import modules from the test's bindir > diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py > index 5991aed..ca0b8c0 100644 > --- a/client/tests/kvm/kvm_tests.py > +++ b/client/tests/kvm/kvm_tests.py > @@ -1,4 +1,4 @@ > -import time, os, logging > +import time, os, logging, re, commands > from autotest_lib.client.common_lib import utils, error > import kvm_utils, kvm_subprocess, ppm_utils, scan_results > > @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env): > """ > # boot the first vm > vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > - > if not vm: > raise error.TestError("VM object not found in environment") > if not vm.is_alive(): > @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env): > for se in sessions: > se.close() > logging.info("Total number booted: %d" % (num -1)) > + > + > +def run_timedrift(test, params, env): > +""" 
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
- "David Huff" wrote: > Michael Goldish wrote: > > - "Lucas Meneghel Rodrigues" wrote: > > > >> Currently we have our kvm test control file and configuration > file, > >> having them split like this makes it harder for users to edit it, > >> let's > >> say, using the web frontend. > >> > >> So it might be good to merge the control file and the config file, > >> and > >> make a refactor on the control file code. Do you think this would > be > >> a valid approach? Any comments are welcome. > >> > >> Lucas > > > > What exactly do you mean by merge? Embed the entire config file in > > the control file as a python string? > > > > A few comments: > > > > 1. The bulk of the config file usually doesn't need to be modified > > from the web frontend, IMO. It actually doesn't need to be modified > > very often -- once everything is defined, only minor changes are > > required. > > > > 2. Changes to the config can be made in the control file rather > easily > > using kvm_config methods that are implemented but not currently > used. > > Instead of the short form: > > > > list = kvm_config.config(filename).get_list() > > > > we can use: > > > > cfg = kvm_config.config(filename) > > > > # parse any one-liner like this: > > cfg.parse_string("only nightly") > > > > # parse anything the parser understands like this: > > cfg.parse_string(""" > > install: > > steps = blah > > foo = bar > > only qcow2.*Windows > > """) > > > > # we can parse several times and the effect is cumulative > > cfg.parse_string(""" > > variants: > > - foo: > > only scsi > > - bar: > > only WinVista.32 > > variants: > > - 1: > > - 2: > > """) > > > > # we can also parse additional files: > > cfg.parse_file("windows_cdkeys.cfg") > > > > # finally, get the resulting list > > list = cfg.get_list() > > > > 3. We may want to consider something in between having the control > and > > config completely separated (what we have today), and having them > both > > in the same file. 
For example, we can define the test sets > (nightly, > > weekly, fc8_quick, custom) in the config file, and select the test > set > > (e.g. "only nightly") in the control file by convention. > Alternatively > > we can omit the test sets from the config file, and just define a > single > > test set (the one we'll be using) in the control file, or define > several > > test sets in the control file, and select one of them. > > We can actually do both things at the same time, by defining the > test > > sets in the config file, and defining a "full" test set among them > (I > > think it's already there), which doesn't modify anything. If we want > to > > use a standard test set from the config file, we can do "only > nightly" > > in the control, and if we want to use a custom test set, we can do: > > cfg.parse_string(""" > > only full > > # define the test set below (no need for variants) > > only RHEL > > only qcow2 > > only autotest.dbench > > """) > > > > 4. It could be a good idea to make a "windows_cdkeys.cfg" file, > that > > contains mainly single-line exceptions, such as: > > WinXP.32: cdkey = REPLACE_ME > > WinXP.64: cdkey = REPLACE_ME > > Win2003.32: cdkey = REPLACE_ME > > ... > > The real cdkeys should be entered by the user. Then the file will > be > > parsed after kvm_tests.cfg, using the parse_file() method (in the > > control). This way the user won't have to enter the cdkeys into the > > long config file every time it gets replaced by a newer version. > The > > cdkeys file won't be replaced because it's specific to the test > > environment (we'll only supply a sample like we do with > kvm_tests.cfg). > > > > Maybe we can generalize this idea and call the file > local_prefs.cfg, > > and decide that the file should contain any environment-specific > > changes that the user wants to make to the config. The file will > > contain mainly exceptions (single or multi-line). 
But I'm not sure > > there are many environment specific things other than cdkeys, so > maybe > > this isn't necessary. > > > > > > Let me know what you think. > > Michael all very good comments, I specifically like the windows > config > file idea. > > The way I always envisioned it was something like this.. > > The config file specifies the whole test matrix, ie all variants that > you could run each test on, ie all os's, all archs, all disk types, > all > cpu/mem configurations. > > The control file would be more of a test specific config file, > setting > any local or environmental vars for each test, and like Michael said > can > override "stuff" from the main config file... > > I also really like the idea of creating a generic kvm_test that all > kvm > tests would inherit from, ie. > $AUTOTEST/client/common_lib/kvm_test.py > > All helper classes ie. kvm.py, kvm_utils.py, kvm_config.py, and even > the > config file itself could then go into > $AUTOTEST/client/common_lib/test_utils/ or even maybe something
Re: KVM crashes when using certain USB device
G wrote: > On Tue, Jul 21, 2009 at 1:23 AM, Jim Paris wrote: > > Here's a patch to try. I'm not familiar with the code, but it looks > > like this buffer might be too small versus the packet lengths that > > you're seeing, and similar definitions in hw/usb-uhci.c. > > > > -jim > > > > diff -urN kvm-87-orig/usb-linux.c kvm-87/usb-linux.c > > --- kvm-87-orig/usb-linux.c 2009-06-23 09:32:38.0 -0400 > > +++ kvm-87/usb-linux.c 2009-07-20 19:15:35.0 -0400 > > @@ -115,7 +115,7 @@ > > uint16_t offset; > > uint8_t state; > > struct usb_ctrlrequest req; > > - uint8_t buffer[1024]; > > + uint8_t buffer[2048]; > > }; > > > > typedef struct USBHostDevice { > > Yes! Applying this patch makes the crash go away! Thank you! Great! > In addition to enabling DEBUG and applying your debug printout > patches, I added a debug printout right above the memcpy()s in > usb-linux.c, and found that the memcpy() in do_token_in() is called > multiple times (since do_token_in() is called multiple times for the > 1993-byte USB packet I have in my usb sniff dumps), which I guess is > what's causing a buffer overflow as the offset is pushed beyond 1024 > bytes. But I'm not sure. Yeah, I think that's it. > I've looked at the code trying to figure out a better way to solve > this, now that the problem spot has been found. To me it seems that > malloc()ing and, when the need arises (the large 1993-byte packets > I'm seeing), realloc()ing the buffer, instead of using a statically > sized buffer, would be the best solution. Dynamically sizing the buffer might get tricky. It looks like hw/usb-uhci.c will go up to 2048, while hw/usb-ohci.c and hw/usb-musb.c could potentially go up to 8192. I think bumping it to 8192 and adding an error instead of overflowing would be good enough. I'll try to understand the code a bit more and then spin a patch.
> One could of course redefine all buffers to be 8192 bytes instead, > but that would just be a false sense of security, and perhaps some > buffers need to be of a particular size to conform to the USB > specification... USB packets don't get that large, but the host controllers can combine them, from what I understand. So it's more a question of what the host controllers can do. -jim -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
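The fix Jim proposes, capping the buffer at the largest size any host controller model can produce (8192 bytes) and signaling an error instead of overflowing, boils down to a bounds check before each copy. A rough sketch of that check (illustrative Python, not the qemu code; the function name is made up):

```python
USB_HOST_BUFFER_SIZE = 8192   # upper bound across uhci/ohci/musb, per the thread


def append_packet(buffer, offset, packet):
    """Copy packet into buffer at offset and return the new offset.

    Raises OverflowError instead of silently corrupting memory, which is
    the failure mode the original fixed-size 1024-byte C buffer had.
    """
    if offset + len(packet) > USB_HOST_BUFFER_SIZE:
        raise OverflowError("USB data would overflow the host buffer")
    buffer[offset:offset + len(packet)] = packet
    return offset + len(packet)
```

A 1993-byte transfer copied in chunks pushes the offset past 1024, which overflowed the original buffer; with the check, anything under the 8192-byte cap succeeds and anything beyond it fails cleanly.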
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
- "Yolkfull Chow" wrote: > On Mon, Jul 20, 2009 at 06:07:19PM +0300, Michael Goldish wrote: > > 1) Log into a guest. > > 2) Take a time reading from the guest and host. > > 3) Run load on the guest and host. > > 4) Take a second time reading. > > 5) Stop the load and rest for a while. > > 6) Take a third time reading. > > 7) If the drift immediately after load is higher than a user- > > specified value (in %), fail. > > If the drift after the rest period is higher than a user-specified > value, > > fail. > > > > Signed-off-by: Michael Goldish > > --- > > client/tests/kvm/kvm.py |1 + > > client/tests/kvm/kvm_tests.py | 161 > - > > 2 files changed, 160 insertions(+), 2 deletions(-) > > > > diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py > > index b18b643..070e463 100644 > > --- a/client/tests/kvm/kvm.py > > +++ b/client/tests/kvm/kvm.py > > @@ -55,6 +55,7 @@ class kvm(test.test): > > "kvm_install": test_routine("kvm_install", > "run_kvm_install"), > > "linux_s3": test_routine("kvm_tests", > "run_linux_s3"), > > "stress_boot": test_routine("kvm_tests", > "run_stress_boot"), > > +"timedrift":test_routine("kvm_tests", > "run_timedrift"), > > } > > > > # Make it possible to import modules from the test's > bindir > > diff --git a/client/tests/kvm/kvm_tests.py > b/client/tests/kvm/kvm_tests.py > > index 5991aed..ca0b8c0 100644 > > --- a/client/tests/kvm/kvm_tests.py > > +++ b/client/tests/kvm/kvm_tests.py > > @@ -1,4 +1,4 @@ > > -import time, os, logging > > +import time, os, logging, re, commands > > from autotest_lib.client.common_lib import utils, error > > import kvm_utils, kvm_subprocess, ppm_utils, scan_results > > > > @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env): > > """ > > # boot the first vm > > vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > - > > if not vm: > > raise error.TestError("VM object not found in > environment") > > if not vm.is_alive(): > > @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env): > > for 
se in sessions: > > se.close() > > logging.info("Total number booted: %d" % (num -1)) > > + > > + > > +def run_timedrift(test, params, env): > > +""" > > +Time drift test (mainly for Windows guests): > > + > > +1) Log into a guest. > > +2) Take a time reading from the guest and host. > > +3) Run load on the guest and host. > > +4) Take a second time reading. > > +5) Stop the load and rest for a while. > > +6) Take a third time reading. > > +7) If the drift immediately after load is higher than a user- > > +specified value (in %), fail. > > +If the drift after the rest period is higher than a > user-specified value, > > +fail. > > + > > +@param test: KVM test object. > > +@param params: Dictionary with test parameters. > > +@param env: Dictionary with the test environment. > > +""" > > +vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > +if not vm: > > +raise error.TestError("VM object not found in > environment") > > +if not vm.is_alive(): > > +raise error.TestError("VM seems to be dead; Test requires a > living VM") > > + > > +logging.info("Waiting for guest to be up...") > > + > > +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2) > > +if not session: > > +raise error.TestFail("Could not log into guest") > > + > > +logging.info("Logged in") > > + > > +# Collect test parameters: > > +# Command to run to get the current time > > +time_command = params.get("time_command") > > +# Filter which should match a string to be passed to > time.strptime() > > +time_filter_re = params.get("time_filter_re") > > +# Time format for time.strptime() > > +time_format = params.get("time_format") > > +guest_load_command = params.get("guest_load_command") > > +guest_load_stop_command = > params.get("guest_load_stop_command") > > +host_load_command = params.get("host_load_command") > > +guest_load_instances = int(params.get("guest_load_instances", > "1")) > > +host_load_instances = int(params.get("host_load_instances", > "0")) > > +# CPU affinity mask for taskset > > 
+cpu_mask = params.get("cpu_mask", "0xFF") > > +load_duration = float(params.get("load_duration", "30")) > > +rest_duration = float(params.get("rest_duration", "10")) > > +drift_threshold = float(params.get("drift_threshold", "200")) > > +drift_threshold_after_rest = > float(params.get("drift_threshold_after_rest", > > + "200")) > > + > > +guest_load_sessions = [] > > +host_load_sessions = [] > > + > > +# Remember the VM's previous CPU affinity > > +prev_cpu_mask = commands.getoutput("taskset -p %s" % > vm.get_
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
On Tue, Jul 21, 2009 at 11:29:56AM -0400, Michael Goldish wrote: > > - "Yolkfull Chow" wrote: > > > On Mon, Jul 20, 2009 at 06:07:19PM +0300, Michael Goldish wrote: > > > 1) Log into a guest. > > > 2) Take a time reading from the guest and host. > > > 3) Run load on the guest and host. > > > 4) Take a second time reading. > > > 5) Stop the load and rest for a while. > > > 6) Take a third time reading. > > > 7) If the drift immediately after load is higher than a user- > > > specified value (in %), fail. > > > If the drift after the rest period is higher than a user-specified > > value, > > > fail. > > > > > > Signed-off-by: Michael Goldish > > > --- > > > client/tests/kvm/kvm.py |1 + > > > client/tests/kvm/kvm_tests.py | 161 > > - > > > 2 files changed, 160 insertions(+), 2 deletions(-) > > > > > > diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py > > > index b18b643..070e463 100644 > > > --- a/client/tests/kvm/kvm.py > > > +++ b/client/tests/kvm/kvm.py > > > @@ -55,6 +55,7 @@ class kvm(test.test): > > > "kvm_install": test_routine("kvm_install", > > "run_kvm_install"), > > > "linux_s3": test_routine("kvm_tests", > > "run_linux_s3"), > > > "stress_boot": test_routine("kvm_tests", > > "run_stress_boot"), > > > +"timedrift":test_routine("kvm_tests", > > "run_timedrift"), > > > } > > > > > > # Make it possible to import modules from the test's > > bindir > > > diff --git a/client/tests/kvm/kvm_tests.py > > b/client/tests/kvm/kvm_tests.py > > > index 5991aed..ca0b8c0 100644 > > > --- a/client/tests/kvm/kvm_tests.py > > > +++ b/client/tests/kvm/kvm_tests.py > > > @@ -1,4 +1,4 @@ > > > -import time, os, logging > > > +import time, os, logging, re, commands > > > from autotest_lib.client.common_lib import utils, error > > > import kvm_utils, kvm_subprocess, ppm_utils, scan_results > > > > > > @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env): > > > """ > > > # boot the first vm > > > vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > > - > > 
> if not vm: > > > raise error.TestError("VM object not found in > > environment") > > > if not vm.is_alive(): > > > @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env): > > > for se in sessions: > > > se.close() > > > logging.info("Total number booted: %d" % (num -1)) > > > + > > > + > > > +def run_timedrift(test, params, env): > > > +""" > > > +Time drift test (mainly for Windows guests): > > > + > > > +1) Log into a guest. > > > +2) Take a time reading from the guest and host. > > > +3) Run load on the guest and host. > > > +4) Take a second time reading. > > > +5) Stop the load and rest for a while. > > > +6) Take a third time reading. > > > +7) If the drift immediately after load is higher than a user- > > > +specified value (in %), fail. > > > +If the drift after the rest period is higher than a > > user-specified value, > > > +fail. > > > + > > > +@param test: KVM test object. > > > +@param params: Dictionary with test parameters. > > > +@param env: Dictionary with the test environment. 
> > > +""" > > > +vm = kvm_utils.env_get_vm(env, params.get("main_vm")) > > > +if not vm: > > > +raise error.TestError("VM object not found in > > environment") > > > +if not vm.is_alive(): > > > +raise error.TestError("VM seems to be dead; Test requires a > > living VM") > > > + > > > +logging.info("Waiting for guest to be up...") > > > + > > > +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2) > > > +if not session: > > > +raise error.TestFail("Could not log into guest") > > > + > > > +logging.info("Logged in") > > > + > > > +# Collect test parameters: > > > +# Command to run to get the current time > > > +time_command = params.get("time_command") > > > +# Filter which should match a string to be passed to > > time.strptime() > > > +time_filter_re = params.get("time_filter_re") > > > +# Time format for time.strptime() > > > +time_format = params.get("time_format") > > > +guest_load_command = params.get("guest_load_command") > > > +guest_load_stop_command = > > params.get("guest_load_stop_command") > > > +host_load_command = params.get("host_load_command") > > > +guest_load_instances = int(params.get("guest_load_instances", > > "1")) > > > +host_load_instances = int(params.get("host_load_instances", > > "0")) > > > +# CPU affinity mask for taskset > > > +cpu_mask = params.get("cpu_mask", "0xFF") > > > +load_duration = float(params.get("load_duration", "30")) > > > +rest_duration = float(params.get("rest_duration", "10")) > > > +drift_threshold = float(params.get("drift_threshold", "200")) > > > +drift_threshold_after_rest =
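The pass/fail criterion in step 7 of the test, drift expressed as a percentage, can be sketched as follows (a minimal illustration of the idea; the actual patch's arithmetic may differ in details such as which delta is used as the denominator):

```python
def drift_percent(host_t0, guest_t0, host_t1, guest_t1):
    """Guest clock drift as a percentage of elapsed host time."""
    host_elapsed = host_t1 - host_t0
    guest_elapsed = guest_t1 - guest_t0
    return 100.0 * abs(host_elapsed - guest_elapsed) / host_elapsed

# Example: the host saw 60 s pass under load but the guest only 57 s,
# so the drift is 5% and the test fails for any threshold below that.
```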
KVM: SVM: force new asid on vcpu migration
If a migrated vcpu matches the asid_generation value of the target pcpu, there will be no TLB flush via TLB_CONTROL_FLUSH_ALL_ASID. The check for vcpu.cpu in pre_svm_run is meaningless since svm_vcpu_load already updated it on schedule-in. Such a vcpu will VMRUN with stale TLB entries. Based on original patch from Joerg Roedel (http://patchwork.kernel.org/patch/10021/) Signed-off-by: Marcelo Tosatti diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 18085d3..90fe88f 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -739,6 +739,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) svm->vmcb->control.tsc_offset += delta; vcpu->cpu = cpu; kvm_migrate_timers(vcpu); + svm->asid_generation = 0; } for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) @@ -1071,7 +1072,6 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *svm_data) svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID; } - svm->vcpu.cpu = svm_data->cpu; svm->asid_generation = svm_data->asid_generation; svm->vmcb->control.asid = svm_data->next_asid++; } @@ -2320,8 +2320,8 @@ static void pre_svm_run(struct vcpu_svm *svm) struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu); svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING; - if (svm->vcpu.cpu != cpu || - svm->asid_generation != svm_data->asid_generation) + /* FIXME: handle wraparound of asid_generation */ + if (svm->asid_generation != svm_data->asid_generation) new_asid(svm, svm_data); }
Re: KVM: SVM: force new asid on vcpu migration
On Tue, Jul 21, 2009 at 12:47:45PM -0300, Marcelo Tosatti wrote: > > If a migrated vcpu matches the asid_generation value of the target pcpu, > there will be no TLB flush via TLB_CONTROL_FLUSH_ALL_ASID. > > The check for vcpu.cpu in pre_svm_run is meaningless since svm_vcpu_load > already updated it on schedule in. > > Such vcpu will VMRUN with stale TLB entries. > > Based on original patch from Joerg Roedel > (http://patchwork.kernel.org/patch/10021/) > > Signed-off-by: Marcelo Tosatti Acked-by: Joerg Roedel > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 18085d3..90fe88f 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -739,6 +739,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) > svm->vmcb->control.tsc_offset += delta; > vcpu->cpu = cpu; > kvm_migrate_timers(vcpu); > + svm->asid_generation = 0; > } > > for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) > @@ -1071,7 +1072,6 @@ static void new_asid(struct vcpu_svm *svm, struct > svm_cpu_data *svm_data) > svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID; > } > > - svm->vcpu.cpu = svm_data->cpu; > svm->asid_generation = svm_data->asid_generation; > svm->vmcb->control.asid = svm_data->next_asid++; > } > @@ -2320,8 +2320,8 @@ static void pre_svm_run(struct vcpu_svm *svm) > struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu); > > svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING; > - if (svm->vcpu.cpu != cpu || > - svm->asid_generation != svm_data->asid_generation) > + /* FIXME: handle wraparound of asid_generation */ > + if (svm->asid_generation != svm_data->asid_generation) > new_asid(svm, svm_data); > } > > --
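The patch relies on a generation-counter idiom: each pcpu's asid_generation starts at 1 and is never 0, so zeroing svm->asid_generation at migration time guarantees a mismatch in pre_svm_run and forces new_asid() on the destination pcpu. A toy model of that invariant (illustrative Python, not the kernel code):

```python
class Pcpu:
    """Per-physical-CPU ASID allocator state."""
    def __init__(self):
        self.asid_generation = 1   # bumped when the ASID space wraps; never 0
        self.next_asid = 1

class Vcpu:
    def __init__(self):
        self.asid_generation = 0   # 0 means "no valid ASID anywhere"
        self.asid = None

def vcpu_load(vcpu):
    # the fix: on migration, invalidate so the next run must allocate
    vcpu.asid_generation = 0

def pre_svm_run(vcpu, pcpu):
    """Return True if a fresh ASID was allocated (stale TLB entries dropped)."""
    if vcpu.asid_generation != pcpu.asid_generation:
        vcpu.asid = pcpu.next_asid
        pcpu.next_asid += 1
        vcpu.asid_generation = pcpu.asid_generation
        return True
    return False
```

Before the fix, a vcpu migrating between pcpus whose generation counters happened to match would skip new_asid() and VMRUN with the old ASID, i.e. with stale TLB entries.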
[PATCH] qemu-kvm: reserve the low 24 gsi values
reserve gsi 0 to 23 so that they won't be allocated for msi Signed-off-by: Michael S. Tsirkin --- diff --git a/qemu-kvm.c b/qemu-kvm.c index c6c9fc6..f440b2d 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -1613,10 +1613,12 @@ int kvm_get_irq_route_gsi(kvm_context_t kvm) { int i, bit; uint32_t *buf = kvm->used_gsi_bitmap; + uint32_t mask = 0xff000000; /* Return the lowest unused GSI in the bitmap */ - for (i = 0; i < kvm->max_gsi / 32; i++) { - bit = ffs(~buf[i]); + for (i = 0; i < kvm->max_gsi / 32; i++) { + bit = ffs(~buf[i] & mask); + mask = 0xffffffff; if (!bit) continue;
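The mask trick reserves the low 24 bits of the first 32-bit bitmap word (GSIs 0 to 23) and then opens up every bit of the later words. Its effect can be modeled like this (a pure-Python sketch with an invented function name, not the qemu-kvm code):

```python
def lowest_free_gsi(used_gsi_bitmap, max_gsi):
    """Return the lowest unallocated GSI >= 24, or -1 if none is free."""
    mask = 0xff000000            # first word: only bits 24..31 may be handed out
    for i in range(max_gsi // 32):
        free = ~used_gsi_bitmap[i] & mask
        mask = 0xffffffff        # later words: every bit may be handed out
        if free:
            # index of the lowest set bit, like C's ffs() minus one
            return i * 32 + ((free & -free).bit_length() - 1)
    return -1
```

With an empty bitmap the first GSI handed out is 24, so the low 24 values stay free for the ioapic/pic pins.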
[PATCH 0/2] virtio: device removal fixes
Here are a couple of obviously correct fixes for virtio device removal. Since these fix regressions (for devices with msi-x capability), I think we need them for 2.6.31. Sorry about the late notice. Michael S. Tsirkin (2): virtio: fix memory leak on device removal virtio: fix double free_irq drivers/virtio/virtio_pci.c |7 ++- 1 files changed, 6 insertions(+), 1 deletions(-)
[PATCH 2/2] virtio: fix double free_irq
Decrement used vectors counter when removing the vq so that vp_free_vectors does not try to free the vector again. Signed-off-by: Michael S. Tsirkin --- drivers/virtio/virtio_pci.c |4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index dab3c86..9dcc368 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -466,8 +466,10 @@ static void vp_del_vq(struct virtqueue *vq) iowrite16(info->queue_index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL); - if (info->vector != VIRTIO_MSI_NO_VECTOR) + if (info->vector != VIRTIO_MSI_NO_VECTOR) { free_irq(vp_dev->msix_entries[info->vector].vector, vq); + --vp_dev->msix_used_vectors; + } if (vp_dev->msix_enabled) { iowrite16(VIRTIO_MSI_NO_VECTOR, -- 1.6.2.5
KVM Irq injection
Hi, I am building some device models around KVM and am interested in injecting IRQs. The current LibKVM.h provides the following function: int kvm_inject_irq ( kvm_context_t kvm, int vcpu, unsigned irq ) Simulate an external vectored interrupt. This allows you to simulate an external vectored interrupt. Parameters: kvm Pointer to the current kvm_context; vcpu Which virtual CPU should get dumped; irq Vector number. Returns: 0 on success. My question is: if I am, say, injecting an IRQ on pin 0 of the PIC, how do I get the corresponding vector? I am using KVM's interrupt controllers. I also see another function: int kvm_set_irq_level ( kvm_context_t kvm, int irq, int level ) Can this be used safely to inject an IRQ? I am guessing irq refers to the pin and level to the value of that pin. How do I tell, with this function, which CPU I want to inject the interrupt into? -Abhishek
[PATCH 1/2] virtio: fix memory leak on device removal
Free up msi vector tables. Signed-off-by: Michael S. Tsirkin --- drivers/virtio/virtio_pci.c |3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 193c8f0..dab3c86 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -489,12 +489,15 @@ static void vp_del_vq(struct virtqueue *vq) /* the config->del_vqs() implementation */ static void vp_del_vqs(struct virtio_device *vdev) { + struct virtio_pci_device *vp_dev = to_vp_device(vdev); struct virtqueue *vq, *n; list_for_each_entry_safe(vq, n, &vdev->vqs, list) vp_del_vq(vq); vp_free_vectors(vdev); + kfree(vp_dev->msix_names); + kfree(vp_dev->msix_entries); } /* the config->find_vqs() implementation */ -- 1.6.2.5
Re: [KVM_AUTOTEST] add kvm hugepage variant
Well, thank you for the notifications, I'll keep them in mind. Also the problem with mempath vs. mem-path is solved. It was just a misspelling in one version of KVM. * fixed patch attached On 20.7.2009 14:58, Lucas Meneghel Rodrigues wrote: On Fri, 2009-07-10 at 12:01 +0200, Lukáš Doktor wrote: After discussion I split the patches. Hi Lukáš, sorry for the delay answering your patch. Looks good to me in general, I have some remarks to make: 1) When posting patches to the autotest kvm tests, please cross post the autotest mailing list (autot...@test.kernel.org) and the KVM list. 2) About scripts to prepare the environment to perform tests - we've had some discussion about including shell scripts on autotest. Bottom line, autotest has a policy of not including non-Python code when possible [1]. So, would you mind re-creating your hugepage setup code in Python and re-sending it? Thanks for your contribution, looking forward to getting it integrated into our tests. [1] Unless when it is not practical for testing purposes - writing tests in C is just fine, for example. This patch adds the kvm_hugepage variant. It prepares the host system and starts the VM with the -mem-path option. It does not clean up after itself, because it's impossible to unmount and free hugepages before all guests are destroyed. I need to ask you what to do about the change of a qemu parameter. Newest versions are using -mempath instead of -mem-path. This is impossible to fix using the current config file. I can see 2 solutions: 1) direct change in kvm_vm.py (parse output and try another param) 2) detect qemu capabilities outside and create an additional layer (better for future occurrences) On 9.7.2009 11:24, Lukáš Doktor wrote: This patch adds the kvm_hugepage variant. It prepares the host system and starts the VM with the -mem-path option. It does not clean up after itself, because it's impossible to unmount and free hugepages before all guests are destroyed. The autotest.libhugetlbfs test is also added.
I need to ask you what to do about the change of a qemu parameter. Newest versions are using -mempath instead of -mem-path. This is impossible to fix using the current config file. I can see 2 solutions: 1) direct change in kvm_vm.py (parse output and try another param) 2) detect qemu capabilities outside and create an additional layer (better for future occurrences) Tested-by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5 diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample index 5bd6eb8..70e290d 100644 --- a/client/tests/kvm/kvm_tests.cfg.sample +++ b/client/tests/kvm/kvm_tests.cfg.sample @@ -555,6 +555,13 @@ variants: only default image_format = raw +variants: +- @kvm_smallpages: +- kvm_hugepages: +hugepage_path = /mnt/hugepage +pre_command = "/usr/bin/python scripts/hugepage.py" +extra_params += " -mem-path /mnt/hugepage" + variants: - @basic: @@ -568,6 +575,7 @@ variants: only Fedora.8.32 only install setup boot shutdown only rtl8139 +only kvm_smallpages - @sample1: only qcow2 only ide diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py index 48f2916..2b97ccc 100644 --- a/client/tests/kvm/kvm_vm.py +++ b/client/tests/kvm/kvm_vm.py @@ -412,6 +412,13 @@ class VM: self.destroy() return False +if output: +logging.debug("qemu produced some output:\n%s", output) +if "alloc_mem_area" in output: +logging.error("Could not allocate hugepage memory" + " -- qemu command:\n%s", qemu_command) +return False + logging.debug("VM appears to be alive with PID %d", self.pid) return True diff -Narup a/client/tests/kvm/scripts/hugepage.py b/client/tests/kvm/scripts/hugepage.py --- a/client/tests/kvm/scripts/hugepage.py 1970-01-01 01:00:00.0 +0100 +++ a/client/tests/kvm/scripts/hugepage.py 2009-07-21 16:47:00.0 +0200 @@ -0,0 +1,63 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- +# Allocates enough hugepages and mounts hugetlbfs +import os, sys, time + +# Variables check & set +vms = os.environ['KVM_TEST_vms'].split().__len__() +try: +max_vms =
int(os.environ['KVM_TEST_max_vms']) +except KeyError: +max_vms = 0 +mem = int(os.environ['KVM_TEST_mem']) +hugepage_path = os.environ['KVM_TEST_hugepage_path'] + +fmeminfo = open("/proc/meminfo", "r") +while fmeminfo: + line = fmeminfo.readline() + if line.startswith("Hugepagesize"): + dumm, hp_size, dumm = line.split() + break +fmeminfo.close() + +if not hp_size: +print "Could not get Hugepagesize from /proc/meminfo file" +raise ValueError + +if vms < max_vms: +vms = max_vms + +vmsm = ((vms * mem) + (vms * 64)) +target = (vmsm * 1024 / int(hp_size)) + +# Iteratively set # of hugepages +fhp = open("/proc/sys/vm/nr_huge
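The sizing arithmetic buried in the script (total guest memory plus a 64 MB per-VM overhead allowance, converted to a hugepage count) can be stated compactly like this (a sketch; the 64 MB figure is the patch's own heuristic and the function name is illustrative):

```python
def hugepages_needed(num_vms, mem_mb, hugepage_size_kb):
    """Hugepages required for num_vms guests of mem_mb MB each,
    plus 64 MB of overhead per VM, as in the hugepage.py script."""
    total_mb = num_vms * mem_mb + num_vms * 64
    return total_mb * 1024 // hugepage_size_kb

# Two 512 MB guests with 2048 kB (2 MB) hugepages need
# (2*512 + 2*64) * 1024 / 2048 = 576 pages.
```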
Re: [PATCH 1/2] virtio: fix memory leak on device removal
Free up msi vector tables. Signed-off-by: Michael S. Tsirkin --- Resending with corrected To list. Sorry about the churn. drivers/virtio/virtio_pci.c |3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 193c8f0..dab3c86 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -489,12 +489,15 @@ static void vp_del_vq(struct virtqueue *vq) /* the config->del_vqs() implementation */ static void vp_del_vqs(struct virtio_device *vdev) { + struct virtio_pci_device *vp_dev = to_vp_device(vdev); struct virtqueue *vq, *n; list_for_each_entry_safe(vq, n, &vdev->vqs, list) vp_del_vq(vq); vp_free_vectors(vdev); + kfree(vp_dev->msix_names); + kfree(vp_dev->msix_entries); } /* the config->find_vqs() implementation */ -- 1.6.2.5
Re: [RFC] KVM test: Refactoring the kvm control file and the config file
Michael Goldish wrote: > - "David Huff" wrote: >> The way I always envisioned it was something like this.. >> >> The config file specifies the whole test matrix, ie all variants that >> you could run each test on, ie all os's, all archs, all disk types, >> all >> cpu/mem configurations. >> >> The control file would be more of a test specific config file, >> setting >> any local or environmental vars for each test, and like Michael said >> can >> override "stuff" from the main config file... >> >> I also really like the idea of creating a generic kvm_test that all >> kvm >> tests would inherit from, ie. >> $AUTOTEST/client/common_lib/kvm_test.py >> >> All helper classes ie. kvm.py, kvm_utils.py, kvm_config.py, and even >> the >> config file itself could then go into >> $AUTOTEST/client/common_lib/test_utils/ or even maybe something like >> $AUTOTEST/client/common_lib/kvm_test_utils/ >> >> All kvm specific tests would inherit from the generic kvm_test, and >> then go into either $AUTOTEST/client/tests/ or >> $AUTOTEST/client/kvm_tests/ directories, each having their own sub dir >> like the current autotest tests. In this dir there would be a >> control >> file specific for each test, that can override the full test matrix >> described in the generic kvm_tests.cfg, as well as any additional >> file >> required by the test. >> >> Anyway just some of my thoughts, I know it's great in theory however >> may >> have some implementation shortfalls, like interdependence between >> tests >> and such... >> >> >> Comments.. > > I think I understand your suggestion, but let me make sure: > > - If there's a global config file that is shared by all tests, I suppose > it'll run all the tests one by one, right? > > - Where will test sets be defined -- in the global config file? > > - If each individual test inherits from the global config file, it'll > also inherit dictionaries describing other tests, right? > e.g.
the configuration of the boot test must explicitly state "only boot", > or it'll run install, migration and autotest as well? > > - If you run the control file of a specific test, what happens -- does > that specific test run in many configurations (many guests, cpu options, > network, ide/scsi), or does it run just once with a single configuration? > I suppose the "normal" behavior would be to run in many configurations, but > I'm not sure what your intention was. > > - Will the global config file look like the config files we have today? > > I think this should be possible to implement, but I haven't given it much > thought so I'm not sure. The more interesting question is whether it's a > good idea. What are the advantages over the current approach?

The advantages I see are: 1. it more closely follows the current autotest structure/layout, 2. it solves the problem of separating each test out of the ever-growing kvm_test.py and gives each test a sub dir for better structure (something we have been talking about), and 3. it addresses the config vs. control file question that this thread originally brought up.

I think the issue is in how the "kvm test" is viewed. Is it one test that gets run against several configurations, or is it several different tests with different configurations? I have been looking at it as the latter, however I do also see it the other way as well. So maybe the solution is a little different than my first thought:

- all kvm tests are in $AUTOTEST/client/kvm_tests/
- all kvm tests inherit from $AUTOTEST/client/common_lib/kvm_test.py
- common functionality is in $AUTOTEST/client/common_lib/kvm_test_utils/
- does *not* include a generic kvm_test.cfg
- we keep the $AUTOTEST/client/kvm/ test dir, which defines the test runs and houses the kvm_test.cfg file and a master control.
- we could then define a couple of sample test runs: full, quick, and others, or implement something like your kvm_tests.common file that other test runs can build on.
So in the end it's pretty similar to what we currently have, except that the $AUTOTEST/client/kvm/ dir only defines the test runs; all common functionality and the tests themselves are moved out. The major advantage I see, aside from the three mentioned above, is that it allows us to simplify the kvm_tests.cfg file. We can move the test-specific config to each test dir, e.g. $AUTOTEST/client/kvm_tests/install/install.cfg, which includes all the install test parameters. Combined with splitting up the config file in $AUTOTEST/client/kvm/, that would make the config file shorter and easier to read. Again, not sure if all this is worth it, but these are some of my thoughts on how to improve the current status. -D -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
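[Editor's note] The override model discussed above — a per-test config layered over the global test matrix, with the per-test values winning — boils down to dictionary layering. A minimal sketch; the parameter names and values below are made up for illustration and are not the actual kvm_config.py format:

```python
# Sketch of a per-test config overriding a global config, as discussed
# in the thread above.  Keys and values are illustrative only.

def load_params(global_cfg, test_cfg):
    """Layer a test-specific config on top of the global test matrix."""
    params = dict(global_cfg)   # start from the shared defaults
    params.update(test_cfg)     # per-test values win on conflict
    return params

global_cfg = {"qemu_binary": "/usr/bin/qemu-kvm", "mem": 512, "shell_port": 22}
boot_cfg = {"mem": 1024, "test": "boot"}   # overrides mem, adds its own key

params = load_params(global_cfg, boot_cfg)
print(params["mem"])    # 1024 -- the per-test value
print(params["test"])   # boot
```

The same layering generalizes to a control file overriding both: apply `params.update()` once per level, from most generic to most specific.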
Re: qemu-kvm missing some msix capability check
On Fri, Jul 17, 2009 at 06:34:40PM +0530, Amit Shah wrote: > Hello, > > Using recent qemu-kvm userspace with a slightly older kernel module I > get this when using the virtio-net device: > > kvm_msix_add: kvm_get_irq_route_gsi failed: No space left on device > > ... and the guest doesn't use the net device. > > This goes away when using a newer kvm module. > > Amit Could you verify if the following helps please? --- virtio: retry on vector assignment failure virtio currently fails to find any vqs if the device supports msi-x but fails to assign it to a queue. Turns out, this actually happens for old host kernels which can't allocate a gsi for the vector. As a result guest does not use virtio net. Fix this by disabling msi on such failures and falling back on regular interrupts. Signed-off-by: Michael S. Tsirkin diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 9dcc368..567c972 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -273,26 +273,35 @@ static void vp_free_vectors(struct virtio_device *vdev) } static int vp_enable_msix(struct pci_dev *dev, struct msix_entry *entries, - int *options, int noptions) + int nvectors) { - int i; - for (i = 0; i < noptions; ++i) - if (!pci_enable_msix(dev, entries, options[i])) - return options[i]; - return -EBUSY; + int err = pci_enable_msix(dev, entries, nvectors); + if (err > 0) + err = -ENOSPC; + return err; } -static int vp_request_vectors(struct virtio_device *vdev, unsigned max_vqs) +static int vp_request_irq(struct virtio_device *vdev) +{ + struct virtio_pci_device *vp_dev = to_vp_device(vdev); + int err; + /* Can't allocate enough MSI-X vectors, use regular interrupt */ + vp_dev->msix_vectors = 0; + err = request_irq(vp_dev->pci_dev->irq, vp_interrupt, + IRQF_SHARED, dev_name(&vp_dev->vdev.dev), vp_dev); + if (err) + return err; + vp_dev->intx_enabled = 1; + return 0; +} + +static int vp_request_vectors(struct virtio_device *vdev, unsigned max_vqs, + int nvectors) { 
struct virtio_pci_device *vp_dev = to_vp_device(vdev); const char *name = dev_name(&vp_dev->vdev.dev); unsigned i, v; int err = -ENOMEM; - /* We want at most one vector per queue and one for config changes. -* Fallback to separate vectors for config and a shared for queues. -* Finally fall back to regular interrupts. */ - int options[] = { max_vqs + 1, 2 }; - int nvectors = max(options[0], options[1]); vp_dev->msix_entries = kmalloc(nvectors * sizeof *vp_dev->msix_entries, GFP_KERNEL); @@ -307,37 +316,29 @@ static int vp_request_vectors(struct virtio_device *vdev, unsigned max_vqs) vp_dev->msix_entries[i].entry = i; err = vp_enable_msix(vp_dev->pci_dev, vp_dev->msix_entries, -options, ARRAY_SIZE(options)); - if (err < 0) { - /* Can't allocate enough MSI-X vectors, use regular interrupt */ - vp_dev->msix_vectors = 0; - err = request_irq(vp_dev->pci_dev->irq, vp_interrupt, - IRQF_SHARED, name, vp_dev); - if (err) - goto error_irq; - vp_dev->intx_enabled = 1; - } else { - vp_dev->msix_vectors = err; - vp_dev->msix_enabled = 1; - - /* Set the vector used for configuration */ - v = vp_dev->msix_used_vectors; - snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names, -"%s-config", name); - err = request_irq(vp_dev->msix_entries[v].vector, - vp_config_changed, 0, vp_dev->msix_names[v], - vp_dev); - if (err) - goto error_irq; - ++vp_dev->msix_used_vectors; - - iowrite16(v, vp_dev->ioaddr + VIRTIO_MSI_CONFIG_VECTOR); - /* Verify we had enough resources to assign the vector */ - v = ioread16(vp_dev->ioaddr + VIRTIO_MSI_CONFIG_VECTOR); - if (v == VIRTIO_MSI_NO_VECTOR) { - err = -EBUSY; - goto error_irq; - } +nvectors); + if (err) + goto error_enable; + vp_dev->msix_vectors = nvectors; + vp_dev->msix_enabled = 1; + + /* Set the vector used for configuration */ + v = vp_dev->msix_used_vectors; + snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names, +"%s-config", name); + err = request_irq(vp_dev->msix_entries[v].vector, +
Re: [PATCH 1/2][v2] KVM: Introduce KVM_SET_IDENTITY_MAP_ADDR ioctl
On Tue, Jul 21, 2009 at 10:42:48AM +0800, Sheng Yang wrote: > Now KVM allows the guest to modify the guest's physical address of EPT's identity mapping page. > > (change from v1: discard unnecessary check, change ioctl to accept a parameter address rather than a value) > > Signed-off-by: Sheng Yang > --- > arch/x86/include/asm/kvm_host.h |1 + > arch/x86/kvm/vmx.c | 13 + > arch/x86/kvm/x86.c | 19 +++ > include/linux/kvm.h |2 ++ > 4 files changed, 31 insertions(+), 4 deletions(-) Applied both, thanks.
Re: [PATCH] kvm: Drop obsolete cpu_get/put in make_all_cpus_request
On Tue, Jul 21, 2009 at 10:24:08AM +0200, Jan Kiszka wrote: > Marcelo Tosatti wrote: > > Jan, > > > > This was suggested but we thought it might be safer to keep the > > get_cpu/put_cpu pair in case -rt kernels require it (which might be > > bullshit, but nobody verified). > > -rt stumbles over both patterns (that's why I stumbled over it in the > first place: get_cpu disables preemption, but spin_lock is a sleeping > lock under -rt) and actually requires requests_lock to become > raw_spinlock_t. Reordering get_cpu and spin_lock would be another > option, but not really a gain for both scenarios. I see. > So unless there is a way to make the whole critical section preemptible > (thus migration-agnostic), I think we can micro-optimize it like this. Can't you switch requests_lock to be raw_spinlock_t then? (or whatever is necessary to make it -rt compatible).
Re: qemu-kvm missing some msix capability check
On (Tue) Jul 21 2009 [19:54:00], Michael S. Tsirkin wrote: > On Fri, Jul 17, 2009 at 06:34:40PM +0530, Amit Shah wrote: > > Hello, > > > > Using recent qemu-kvm userspace with a slightly older kernel module I > > get this when using the virtio-net device: > > > > kvm_msix_add: kvm_get_irq_route_gsi failed: No space left on device > > > > ... and the guest doesn't use the net device. > > > > This goes away when using a newer kvm module. > > > > Amit > > Could you verify if the following helps please? What is this based on? Fails to apply on kvm/master. Amit
Re: KVM Irq injection
On Tue, Jul 21, 2009 at 08:57:26AM -0700, Saksena, Abhishek wrote: > Hi, > I am building some device models around KVM and am interested in injecting > IRQs. > > The current LibKVM.h provides the following function: > > int kvm_inject_irq ( kvm_context_t kvm, int vcpu, unsigned irq ) > > Simulate an external vectored interrupt. > > This allows you to simulate an external vectored interrupt. > > Parameters: > kvm Pointer to the current kvm_context > vcpu Which virtual CPU should get dumped > irq Vector number > > Returns: > 0 on success > > My question is: if I am, say, injecting an IRQ on pin 0 of the PIC, how do I get the corresponding vector? I am using KVM's interrupt controllers. This ioctl should be used if userspace emulates HW that maps IRQ to interrupt vector (PIC/IOAPIC/LAPIC). > > I also see another function: > > int kvm_set_irq_level ( kvm_context_t kvm, int irq, int level ) > > Can this be used safely to inject an IRQ? I am guessing irq refers to the pin > and level to the value of that pin. How do I tell with this function which CPU I > want to inject the interrupt into? > And this one should be used if PIC/IOAPIC/LAPIC are emulated by the kernel. -- Gleb.
[PATCH v2 0/5] qemu-kvm cleanups: ioctl merge
Marcelo, This is a resend of the ioctl series. I'm now changing all call sites to reflect upstream behaviour. Thanks!
[PATCH v2 1/5] remove kvm types from handle unhandled
I'm in an ongoing process of not using kvm-specific types in function declarations. handle_unhandled() is the first victim. Since we don't really use this data, but just the reason, remove them entirely.

Signed-off-by: Glauber Costa
---
 qemu-kvm.c |    9 +++------
 1 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/qemu-kvm.c b/qemu-kvm.c
index c13ecba..2484bd9 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -176,8 +176,7 @@ int kvm_mmio_write(void *opaque, uint64_t addr, uint8_t *data, int len)
     return 0;
 }
 
-static int handle_unhandled(kvm_context_t kvm, kvm_vcpu_context_t vcpu,
-                            uint64_t reason)
+static int handle_unhandled(uint64_t reason)
 {
     fprintf(stderr, "kvm: unhandled exit %"PRIx64"\n", reason);
     return -EINVAL;
@@ -1085,12 +1084,10 @@ again:
     if (1) {
         switch (run->exit_reason) {
         case KVM_EXIT_UNKNOWN:
-            r = handle_unhandled(kvm, vcpu,
-                                 run->hw.hardware_exit_reason);
+            r = handle_unhandled(run->hw.hardware_exit_reason);
             break;
         case KVM_EXIT_FAIL_ENTRY:
-            r = handle_unhandled(kvm, vcpu,
-                                 run->fail_entry.hardware_entry_failure_reason);
+            r = handle_unhandled(run->fail_entry.hardware_entry_failure_reason);
             break;
         case KVM_EXIT_EXCEPTION:
             fprintf(stderr, "exception %d (%x)\n",
-- 
1.6.2.2
[PATCH v2 2/5] reuse kvm_vm_ioctl
Start using kvm_vm_ioctl's code. For type safety, delete vm_fd from kvm_context entirely, so the compiler can play along with us helping to detect errors I might have made. Also, we were slightly different from qemu upstream in handling error code from ioctl, since we were always testing for -1, while kvm_vm_ioctl returns -errno. We already did this in most of the call sites, so this patch has the big advantage of simplifying call sites. Diffstat says: 4 files changed, 58 insertions(+), 134 deletions(-) Signed-off-by: Glauber Costa --- kvm-all.c |2 + qemu-kvm-x86.c | 51 ++ qemu-kvm.c | 133 +--- qemu-kvm.h |6 +- 4 files changed, 58 insertions(+), 134 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index 67908a7..9373d99 100644 --- a/kvm-all.c +++ b/kvm-all.c @@ -809,6 +809,7 @@ int kvm_ioctl(KVMState *s, int type, ...) return ret; } +#endif int kvm_vm_ioctl(KVMState *s, int type, ...) { @@ -827,6 +828,7 @@ int kvm_vm_ioctl(KVMState *s, int type, ...) return ret; } +#ifdef KVM_UPSTREAM int kvm_vcpu_ioctl(CPUState *env, int type, ...) 
{ int ret; diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c index df40aae..eec64db 100644 --- a/qemu-kvm-x86.c +++ b/qemu-kvm-x86.c @@ -40,10 +40,10 @@ int kvm_set_tss_addr(kvm_context_t kvm, unsigned long addr) r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR); if (r > 0) { - r = ioctl(kvm->vm_fd, KVM_SET_TSS_ADDR, addr); - if (r == -1) { + r = kvm_vm_ioctl(kvm_state, KVM_SET_TSS_ADDR, addr); + if (r < 0) { fprintf(stderr, "kvm_set_tss_addr: %m\n"); - return -errno; + return r; } return 0; } @@ -82,7 +82,7 @@ static int kvm_create_pit(kvm_context_t kvm) if (!kvm->no_pit_creation) { r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_PIT); if (r > 0) { - r = ioctl(kvm->vm_fd, KVM_CREATE_PIT); + r = kvm_vm_ioctl(kvm_state, KVM_CREATE_PIT); if (r >= 0) kvm->pit_in_kernel = 1; else { @@ -211,7 +211,6 @@ int kvm_create_memory_alias(kvm_context_t kvm, .memory_size = len, .target_phys_addr = target_phys, }; - int fd = kvm->vm_fd; int r; int slot; @@ -222,7 +221,7 @@ int kvm_create_memory_alias(kvm_context_t kvm, return -EBUSY; alias.slot = slot; - r = ioctl(fd, KVM_SET_MEMORY_ALIAS, &alias); + r = kvm_vm_ioctl(kvm_state, KVM_SET_MEMORY_ALIAS, &alias); if (r == -1) return -errno; @@ -269,55 +268,31 @@ int kvm_set_lapic(kvm_vcpu_context_t vcpu, struct kvm_lapic_state *s) int kvm_get_pit(kvm_context_t kvm, struct kvm_pit_state *s) { - int r; if (!kvm->pit_in_kernel) return 0; - r = ioctl(kvm->vm_fd, KVM_GET_PIT, s); - if (r == -1) { - r = -errno; - perror("kvm_get_pit"); - } - return r; + return kvm_vm_ioctl(kvm_state, KVM_GET_PIT, s); } int kvm_set_pit(kvm_context_t kvm, struct kvm_pit_state *s) { - int r; if (!kvm->pit_in_kernel) return 0; - r = ioctl(kvm->vm_fd, KVM_SET_PIT, s); - if (r == -1) { - r = -errno; - perror("kvm_set_pit"); - } - return r; + return kvm_vm_ioctl(kvm_state, KVM_SET_PIT, s); } #ifdef KVM_CAP_PIT_STATE2 int kvm_get_pit2(kvm_context_t kvm, struct kvm_pit_state2 *ps2) { - int r; if (!kvm->pit_in_kernel) return 0; - r = ioctl(kvm->vm_fd, 
KVM_GET_PIT2, ps2); - if (r == -1) { - r = -errno; - perror("kvm_get_pit2"); - } - return r; + return kvm_vm_ioctl(kvm_state, KVM_GET_PIT2, ps2); } int kvm_set_pit2(kvm_context_t kvm, struct kvm_pit_state2 *ps2) { - int r; if (!kvm->pit_in_kernel) return 0; - r = ioctl(kvm->vm_fd, KVM_SET_PIT2, ps2); - if (r == -1) { - r = -errno; - perror("kvm_set_pit2"); - } - return r; + return kvm_vm_ioctl(kvm_state, KVM_SET_PIT2, ps2); } #endif @@ -582,10 +557,10 @@ int kvm_set_shadow_pages(kvm_context_t kvm, unsigned int nrshadow_pages) r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_MMU_SHADOW_CACHE_CONTROL); if (r > 0) { - r = ioctl(kvm->vm_fd, KVM_SET_NR_MMU_PAGES, nrshadow_pages); - if (r == -1) { + r = kvm_vm_ioctl(kvm_state, KVM_SET_NR_MMU_PAGES, nrshadow_pages); + if (r < 0) { fprintf(stderr, "kvm_set_shadow_pages: %m\n"); - return -e
[PATCH v2 3/5] reuse kvm_ioctl
Start using kvm_ioctl's code. For type safety, delete fd from kvm_context entirely, so the compiler can play along with us helping to detect errors I might have made. Signed-off-by: Glauber Costa Also, we were slightly different from qemu upstream in handling error code from ioctl, since we were always testing for -1, while kvm_vm_ioctl returns -errno. We already did this in most of the call sites, so this patch has the big advantage of simplifying call sites. --- kvm-all.c |2 +- qemu-kvm-x86.c | 37 + qemu-kvm.c | 41 - qemu-kvm.h |3 +-- 4 files changed, 39 insertions(+), 44 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index 9373d99..0ec6475 100644 --- a/kvm-all.c +++ b/kvm-all.c @@ -793,6 +793,7 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr, } } +#endif int kvm_ioctl(KVMState *s, int type, ...) { int ret; @@ -809,7 +810,6 @@ int kvm_ioctl(KVMState *s, int type, ...) return ret; } -#endif int kvm_vm_ioctl(KVMState *s, int type, ...) { diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c index eec64db..c9f9ac3 100644 --- a/qemu-kvm-x86.c +++ b/qemu-kvm-x86.c @@ -38,7 +38,7 @@ int kvm_set_tss_addr(kvm_context_t kvm, unsigned long addr) #ifdef KVM_CAP_SET_TSS_ADDR int r; - r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR); + r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR); if (r > 0) { r = kvm_vm_ioctl(kvm_state, KVM_SET_TSS_ADDR, addr); if (r < 0) { @@ -56,7 +56,7 @@ static int kvm_init_tss(kvm_context_t kvm) #ifdef KVM_CAP_SET_TSS_ADDR int r; - r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR); + r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_SET_TSS_ADDR); if (r > 0) { /* * this address is 3 pages before the bios, and the bios should present @@ -80,7 +80,7 @@ static int kvm_create_pit(kvm_context_t kvm) kvm->pit_in_kernel = 0; if (!kvm->no_pit_creation) { - r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_PIT); + r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_PIT); if (r > 0) { r = kvm_vm_ioctl(kvm_state, 
KVM_CREATE_PIT); if (r >= 0) @@ -356,11 +356,11 @@ void kvm_show_code(kvm_vcpu_context_t vcpu) struct kvm_msr_list *kvm_get_msr_list(kvm_context_t kvm) { struct kvm_msr_list sizer, *msrs; - int r, e; + int r; sizer.nmsrs = 0; - r = ioctl(kvm->fd, KVM_GET_MSR_INDEX_LIST, &sizer); - if (r == -1 && errno != E2BIG) + r = kvm_ioctl(kvm_state, KVM_GET_MSR_INDEX_LIST, &sizer); + if (r < 0 && r != -E2BIG) return NULL; /* Old kernel modules had a bug and could write beyond the provided memory. Allocate at least a safe amount of 1K. */ @@ -368,11 +368,10 @@ struct kvm_msr_list *kvm_get_msr_list(kvm_context_t kvm) sizer.nmsrs * sizeof(*msrs->indices))); msrs->nmsrs = sizer.nmsrs; - r = ioctl(kvm->fd, KVM_GET_MSR_INDEX_LIST, msrs); - if (r == -1) { - e = errno; + r = kvm_ioctl(kvm_state, KVM_GET_MSR_INDEX_LIST, msrs); + if (r < 0) { free(msrs); - errno = e; + errno = r; return NULL; } return msrs; @@ -413,10 +412,10 @@ int kvm_get_mce_cap_supported(kvm_context_t kvm, uint64_t *mce_cap, #ifdef KVM_CAP_MCE int r; -r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, KVM_CAP_MCE); +r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_MCE); if (r > 0) { *max_banks = r; -return ioctl(kvm->fd, KVM_X86_GET_MCE_CAP_SUPPORTED, mce_cap); +return kvm_ioctl(kvm_state, KVM_X86_GET_MCE_CAP_SUPPORTED, mce_cap); } #endif return -ENOSYS; @@ -554,7 +553,7 @@ int kvm_set_shadow_pages(kvm_context_t kvm, unsigned int nrshadow_pages) #ifdef KVM_CAP_MMU_SHADOW_CACHE_CONTROL int r; - r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, + r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_MMU_SHADOW_CACHE_CONTROL); if (r > 0) { r = kvm_vm_ioctl(kvm_state, KVM_SET_NR_MMU_PAGES, nrshadow_pages); @@ -573,7 +572,7 @@ int kvm_get_shadow_pages(kvm_context_t kvm, unsigned int *nrshadow_pages) #ifdef KVM_CAP_MMU_SHADOW_CACHE_CONTROL int r; - r = ioctl(kvm->fd, KVM_CHECK_EXTENSION, + r = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, KVM_CAP_MMU_SHADOW_CACHE_CONTROL); if (r > 0) { *nrshadow_pages = kvm_vm_ioctl(kvm_state, 
KVM_GET_NR_MMU_PAGES); @@ -592,8 +591,8 @@ static int tpr_access_reporting(kvm_vcpu_context_t vcpu, int enabled) .enabled = enabled, }; - r = ioctl(vcpu->kvm->fd,
[PATCH v2 4/5] check extension
use upstream check_extension code Signed-off-by: Glauber Costa --- hw/device-assignment.c |2 +- kvm-all.c |2 ++ qemu-kvm-x86.c |6 +++--- qemu-kvm.c | 18 -- qemu-kvm.h |2 +- 5 files changed, 11 insertions(+), 19 deletions(-) diff --git a/hw/device-assignment.c b/hw/device-assignment.c index 88c3baf..75db546 100644 --- a/hw/device-assignment.c +++ b/hw/device-assignment.c @@ -639,7 +639,7 @@ static int assign_device(AssignedDevInfo *adev) /* We always enable the IOMMU if present * (or when not disabled on the command line) */ -r = kvm_check_extension(kvm_context, KVM_CAP_IOMMU); +r = kvm_check_extension(kvm_state, KVM_CAP_IOMMU); if (r && !adev->disable_iommu) assigned_dev_data.flags |= KVM_DEV_ASSIGN_ENABLE_IOMMU; #endif diff --git a/kvm-all.c b/kvm-all.c index 0ec6475..b4b5a35 100644 --- a/kvm-all.c +++ b/kvm-all.c @@ -383,6 +383,7 @@ int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size) return ret; } +#endif int kvm_check_extension(KVMState *s, unsigned int extension) { int ret; @@ -394,6 +395,7 @@ int kvm_check_extension(KVMState *s, unsigned int extension) return ret; } +#ifdef KVM_UPSTREAM int kvm_init(int smp_cpus) { diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c index c9f9ac3..499e305 100644 --- a/qemu-kvm-x86.c +++ b/qemu-kvm-x86.c @@ -303,7 +303,7 @@ int kvm_has_pit_state2(kvm_context_t kvm) int r = 0; #ifdef KVM_CAP_PIT_STATE2 - r = kvm_check_extension(kvm, KVM_CAP_PIT_STATE2); + r = kvm_check_extension(kvm_state, KVM_CAP_PIT_STATE2); #endif return r; } @@ -657,7 +657,7 @@ uint32_t kvm_get_supported_cpuid(kvm_context_t kvm, uint32_t function, int reg) uint32_t ret = 0; uint32_t cpuid_1_edx; - if (!kvm_check_extension(kvm, KVM_CAP_EXT_CPUID)) { + if (!kvm_check_extension(kvm_state, KVM_CAP_EXT_CPUID)) { return -1U; } @@ -1189,7 +1189,7 @@ static int get_para_features(kvm_context_t kvm_context) int i, features = 0; for (i = 0; i < ARRAY_SIZE(para_features)-1; i++) { - if (kvm_check_extension(kvm_context, para_features[i].cap)) + if 
(kvm_check_extension(kvm_state, para_features[i].cap)) features |= (1 << para_features[i].feature); } diff --git a/qemu-kvm.c b/qemu-kvm.c index 98cfee0..e200dea 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -589,16 +589,6 @@ static int kvm_create_default_phys_mem(kvm_context_t kvm, return -1; } -int kvm_check_extension(kvm_context_t kvm, int ext) -{ - int ret; - - ret = kvm_ioctl(kvm_state, KVM_CHECK_EXTENSION, ext); - if (ret > 0) - return ret; - return 0; -} - void kvm_create_irqchip(kvm_context_t kvm) { int r; @@ -1345,7 +1335,7 @@ int kvm_has_gsi_routing(kvm_context_t kvm) int r = 0; #ifdef KVM_CAP_IRQ_ROUTING -r = kvm_check_extension(kvm, KVM_CAP_IRQ_ROUTING); +r = kvm_check_extension(kvm_state, KVM_CAP_IRQ_ROUTING); #endif return r; } @@ -1353,7 +1343,7 @@ int kvm_has_gsi_routing(kvm_context_t kvm) int kvm_get_gsi_count(kvm_context_t kvm) { #ifdef KVM_CAP_IRQ_ROUTING - return kvm_check_extension(kvm, KVM_CAP_IRQ_ROUTING); + return kvm_check_extension(kvm_state, KVM_CAP_IRQ_ROUTING); #else return -EINVAL; #endif @@ -1606,7 +1596,7 @@ int kvm_irqfd(kvm_context_t kvm, int gsi, int flags) int r; int fd; - if (!kvm_check_extension(kvm, KVM_CAP_IRQFD)) + if (!kvm_check_extension(kvm_state, KVM_CAP_IRQFD)) return -ENOENT; fd = eventfd(0, 0); @@ -2381,7 +2371,7 @@ int kvm_setup_guest_memory(void *area, unsigned long size) int kvm_qemu_check_extension(int ext) { -return kvm_check_extension(kvm_context, ext); +return kvm_check_extension(kvm_state, ext); } int kvm_qemu_init_env(CPUState *cenv) diff --git a/qemu-kvm.h b/qemu-kvm.h index 8c9b72f..ec35f29 100644 --- a/qemu-kvm.h +++ b/qemu-kvm.h @@ -167,7 +167,6 @@ int kvm_create(kvm_context_t kvm, unsigned long phys_mem_bytes, void **phys_mem); int kvm_create_vm(kvm_context_t kvm); -int kvm_check_extension(kvm_context_t kvm, int ext); void kvm_create_irqchip(kvm_context_t kvm); /*! 
@@ -1198,5 +1197,6 @@ extern KVMState *kvm_state;
 int kvm_ioctl(KVMState *s, int type, ...);
 int kvm_vm_ioctl(KVMState *s, int type, ...);
+int kvm_check_extension(KVMState *s, unsigned int ext);
 #endif
-- 
1.6.2.2
[PATCH v2 5/5] use upstream cpuid code
use cpuid code from upstream. By doing that, we lose the following snippet in kvm_get_supported_cpuid(): ret |= 1 << 12; /* MTRR */ ret |= 1 << 16; /* PAT */ ret |= 1 << 7; /* MCE */ ret |= 1 << 14; /* MCA */ A quick search in mailing lists says this code is not really necessary, and we're keeping it just for backwards compatibility. This is not that important, because we'd lose it anyway in the golden day in which we totally merge with qemu. Anyway, if it do _is_ important, we can send a patch to qemu with it. Signed-off-by: Glauber Costa --- qemu-kvm-x86.c| 119 - target-i386/kvm.c |2 + 2 files changed, 2 insertions(+), 119 deletions(-) diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c index 499e305..c88d289 100644 --- a/qemu-kvm-x86.c +++ b/qemu-kvm-x86.c @@ -615,106 +615,6 @@ int kvm_disable_tpr_access_reporting(kvm_vcpu_context_t vcpu) #endif -#ifdef KVM_CAP_EXT_CPUID - -static struct kvm_cpuid2 *try_get_cpuid(kvm_context_t kvm, int max) -{ - struct kvm_cpuid2 *cpuid; - int r, size; - - size = sizeof(*cpuid) + max * sizeof(*cpuid->entries); - cpuid = qemu_malloc(size); - cpuid->nent = max; - r = kvm_ioctl(kvm_state, KVM_GET_SUPPORTED_CPUID, cpuid); - if (r == 0 && cpuid->nent >= max) - r = -E2BIG; - if (r < 0) { - if (r == -E2BIG) { - free(cpuid); - return NULL; - } else { - fprintf(stderr, "KVM_GET_SUPPORTED_CPUID failed: %s\n", - strerror(-r)); - exit(1); - } - } - return cpuid; -} - -#define R_EAX 0 -#define R_ECX 1 -#define R_EDX 2 -#define R_EBX 3 -#define R_ESP 4 -#define R_EBP 5 -#define R_ESI 6 -#define R_EDI 7 - -uint32_t kvm_get_supported_cpuid(kvm_context_t kvm, uint32_t function, int reg) -{ - struct kvm_cpuid2 *cpuid; - int i, max; - uint32_t ret = 0; - uint32_t cpuid_1_edx; - - if (!kvm_check_extension(kvm_state, KVM_CAP_EXT_CPUID)) { - return -1U; - } - - max = 1; - while ((cpuid = try_get_cpuid(kvm, max)) == NULL) { - max *= 2; - } - - for (i = 0; i < cpuid->nent; ++i) { - if (cpuid->entries[i].function == function) { - switch (reg) { - case R_EAX: - 
ret = cpuid->entries[i].eax; - break; - case R_EBX: - ret = cpuid->entries[i].ebx; - break; - case R_ECX: - ret = cpuid->entries[i].ecx; - break; - case R_EDX: - ret = cpuid->entries[i].edx; -if (function == 1) { -/* kvm misreports the following features - */ -ret |= 1 << 12; /* MTRR */ -ret |= 1 << 16; /* PAT */ -ret |= 1 << 7; /* MCE */ -ret |= 1 << 14; /* MCA */ -} - - /* On Intel, kvm returns cpuid according to -* the Intel spec, so add missing bits -* according to the AMD spec: -*/ - if (function == 0x8001) { - cpuid_1_edx = kvm_get_supported_cpuid(kvm, 1, R_EDX); - ret |= cpuid_1_edx & 0xdfeff7ff; - } - break; - } - } - } - - free(cpuid); - - return ret; -} - -#else - -uint32_t kvm_get_supported_cpuid(kvm_context_t kvm, uint32_t function, int reg) -{ - return -1U; -} - -#endif int kvm_qemu_create_memory_alias(uint64_t phys_start, uint64_t len, uint64_t target_phys) @@ -1196,19 +1096,6 @@ static int get_para_features(kvm_context_t kvm_context) return features; } -static void kvm_trim_features(uint32_t *features, uint32_t supported) -{ -int i; -uint32_t mask; - -for (i = 0; i < 32; ++i) { -mask = 1U << i; -if ((*features & mask) && !(supported & mask)) { -*features &= ~mask; -} -} -} - int kvm_arch_qemu_init_env(CPUState *cenv) { struct kvm_cpuid_entry2 cpuid_ent[100]; @@ -1626,12 +1513,6 @@ int kvm_arch_init_irq_routing(void) return 0; } -uint32_t kvm_arch_get_supported_cpuid(CPUState *env, uint32_t function, -
Re: [Autotest] [RFC] KVM test: Refactoring the kvm control file and the config file
> The advantages I see are: 1. it more closely follows the current > autotest structure/layout, 2. solves the problem of separating each test > out of the ever growing kvm_test.py and gives a sub dir of each test for > better structure (something we have been talking about) and 3. addresses > the config vs. control file question that this thread originally brought up. > > I think the issue is in how the "kvm test" is viewed. Is it one test > that gets run against several configurations, or is it several different > tests with different configurations? I have been looking at it as the > latter, however I do also see it the other way as well.

I think if you try to force everything you do into one test, you'll lose a lot of the power and flexibility of the system. I can't claim to have entirely figured out what you're doing, but it seems somewhat like you're reinventing some stuff with the current approach? Some of the general design premises:

1) Anything the user might want to configure should be in the control file.
2) Anything in a test should be really pretty static.
3) The way we get around a lot of the conflicts is by passing parameters to run_test, though leaving sensible defaults in for them makes things much easier to use.
4) The frontend and cli are designed to allow you to edit control files, and/or save custom versions - that's the single object we throw to machines under test ... there's no passing of cfg files to clients.

We often end up with longer control files that contain a pre-canned set of tests, and even "meta-control files" that kick off a multitude of jobs across thousands of machines, using frontend.py. That can include control flow - for example our internal kernel testing uses a waterfall model with several steps:

1. Compile the kernel from source.
2. Test on a bunch of single machines with a smoketest that takes an hour or so.
3. Test on small groups of machines with cut-down simulations of cluster tests.
4. Test on full clusters.
If any of those tests fails (with some built-in fault tolerance for a small hardware fallout rate), we stop the testing. All of that control flow is governed by a control file. It sounds complex, but it's really not if you build your "building blocks" carefully, and it's extremely powerful. > So maybe the solution is a little different than my first thought > > - all kvm tests are in $AUTOTEST/client/kvm_tests/ > - all kvm tests inherit from $AUTOTEST/client/common_lib/kvm_test.py > - common functionality is in $AUTOTEST/client/common_lib/kvm_test_utils/ > - does *not* include generic kvm_test.cfg > - we keep the $AUTOTEST/client/kvm/ test dir which defines the test runs > and houses kvm_test.cfg file and a master control. > - we could then define a couple sample test runs: full, quick, and > others or implement something like your kvm_tests.common file that > other test runs can build on. Are all of your tests exclusive to KVM? I would think you'd want to be able to run any "normal" test inside a KVM environment too?
Re: [PATCH] qemu-kvm: reserve the low 24 gsi values
On Tue, Jul 21, 2009 at 06:57:42PM +0300, Michael S. Tsirkin wrote:
> reserve gsi 0 to 23 so that they won't be allocated for msi

In the not so distant future we may want to support more than one ioapic, so 23 will become 23*n where n is the number of ioapics. I prefer to fix it by moving msi injection to its own ioctl. But for now maybe we can scan used_gsi_bitmap from the end during allocation of a gsi for msi?

> Signed-off-by: Michael S. Tsirkin
>
> ---
>
> diff --git a/qemu-kvm.c b/qemu-kvm.c
> index c6c9fc6..f440b2d 100644
> --- a/qemu-kvm.c
> +++ b/qemu-kvm.c
> @@ -1613,10 +1613,12 @@ int kvm_get_irq_route_gsi(kvm_context_t kvm)
> {
>     int i, bit;
>     uint32_t *buf = kvm->used_gsi_bitmap;
> +   uint32_t mask = 0xff000000;
>
>     /* Return the lowest unused GSI in the bitmap */
> -   for (i = 0; i < kvm->max_gsi / 32; i++) {
> -       bit = ffs(~buf[i]);
> +   for (i = 0; i < kvm->max_gsi / 32; i++) {
> +       bit = ffs(~buf[i] & mask);
> +       mask = 0xffffffff;
>         if (!bit)
>             continue;

-- 
Gleb.
Re: [PATCH 02/11] Unregister ack notifier callback on PIT freeing.
On Thu, Jul 16, 2009 at 05:03:30PM +0300, Gleb Natapov wrote: > > Signed-off-by: Gleb Natapov > --- > arch/x86/kvm/i8254.c |2 ++ > 1 files changed, 2 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c > index 137e548..472653c 100644 > --- a/arch/x86/kvm/i8254.c > +++ b/arch/x86/kvm/i8254.c > @@ -672,6 +672,8 @@ void kvm_free_pit(struct kvm *kvm) > if (kvm->arch.vpit) { > kvm_unregister_irq_mask_notifier(kvm, 0, > &kvm->arch.vpit->mask_notifier); > + kvm_unregister_irq_ack_notifier(kvm, > + &kvm->arch.vpit->pit_state.irq_ack_notifier); > mutex_lock(&kvm->arch.vpit->pit_state.lock); > timer = &kvm->arch.vpit->pit_state.pit_timer.timer; > hrtimer_cancel(timer); Applied this one. I suppose you're reworking the lockless patchset to include the PIC irq ack changes? (as discussed the pic_unlock trick with vcpu_kick is not necessary anymore etc). Just please if you have any fixes, send them separately so backporting to -stable is easier. Thanks
Re: [Autotest] [KVM-AUTOTEST PATCH 12/17] KVM test: add simple timedrift test (mainly for Windows)
On Tue, Jul 21, 2009 at 12:23:15PM +0300, Dor Laor wrote: > On 07/20/2009 06:07 PM, Michael Goldish wrote: >> 1) Log into a guest. >> 2) Take a time reading from the guest and host. >> 3) Run load on the guest and host. >> 4) Take a second time reading. >> 5) Stop the load and rest for a while. >> 6) Take a third time reading. >> 7) If the drift immediately after load is higher than a user- >> specified value (in %), fail. >> If the drift after the rest period is higher than a user-specified value, >> fail. >> >> Signed-off-by: Michael Goldish >> --- >> client/tests/kvm/kvm.py |1 + >> client/tests/kvm/kvm_tests.py | 161 >> - >> 2 files changed, 160 insertions(+), 2 deletions(-) >> >> diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py >> index b18b643..070e463 100644 >> --- a/client/tests/kvm/kvm.py >> +++ b/client/tests/kvm/kvm.py >> @@ -55,6 +55,7 @@ class kvm(test.test): >> "kvm_install": test_routine("kvm_install", >> "run_kvm_install"), >> "linux_s3": test_routine("kvm_tests", "run_linux_s3"), >> "stress_boot": test_routine("kvm_tests", >> "run_stress_boot"), >> +"timedrift":test_routine("kvm_tests", "run_timedrift"), >> } >> >> # Make it possible to import modules from the test's bindir >> diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py >> index 5991aed..ca0b8c0 100644 >> --- a/client/tests/kvm/kvm_tests.py >> +++ b/client/tests/kvm/kvm_tests.py >> @@ -1,4 +1,4 @@ >> -import time, os, logging >> +import time, os, logging, re, commands >> from autotest_lib.client.common_lib import utils, error >> import kvm_utils, kvm_subprocess, ppm_utils, scan_results >> >> @@ -529,7 +529,6 @@ def run_stress_boot(tests, params, env): >> """ >> # boot the first vm >> vm = kvm_utils.env_get_vm(env, params.get("main_vm")) >> - >> if not vm: >> raise error.TestError("VM object not found in environment") >> if not vm.is_alive(): >> @@ -586,3 +585,161 @@ def run_stress_boot(tests, params, env): >> for se in sessions: >> se.close() >> 
logging.info("Total number booted: %d" % (num -1)) >> + >> + >> +def run_timedrift(test, params, env): >> +""" >> +Time drift test (mainly for Windows guests): >> + >> +1) Log into a guest. >> +2) Take a time reading from the guest and host. >> +3) Run load on the guest and host. >> +4) Take a second time reading. >> +5) Stop the load and rest for a while. >> +6) Take a third time reading. >> +7) If the drift immediately after load is higher than a user- >> +specified value (in %), fail. >> +If the drift after the rest period is higher than a user-specified >> value, >> +fail. >> + >> +@param test: KVM test object. >> +@param params: Dictionary with test parameters. >> +@param env: Dictionary with the test environment. >> +""" >> +vm = kvm_utils.env_get_vm(env, params.get("main_vm")) >> +if not vm: >> +raise error.TestError("VM object not found in environment") >> +if not vm.is_alive(): >> +raise error.TestError("VM seems to be dead; Test requires a living >> VM") >> + >> +logging.info("Waiting for guest to be up...") >> + >> +session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2) >> +if not session: >> +raise error.TestFail("Could not log into guest") >> + >> +logging.info("Logged in") >> + >> +# Collect test parameters: >> +# Command to run to get the current time >> +time_command = params.get("time_command") >> +# Filter which should match a string to be passed to time.strptime() >> +time_filter_re = params.get("time_filter_re") >> +# Time format for time.strptime() >> +time_format = params.get("time_format") >> +guest_load_command = params.get("guest_load_command") >> +guest_load_stop_command = params.get("guest_load_stop_command") >> +host_load_command = params.get("host_load_command") >> +guest_load_instances = int(params.get("guest_load_instances", "1")) >> +host_load_instances = int(params.get("host_load_instances", "0")) >> +# CPU affinity mask for taskset >> +cpu_mask = params.get("cpu_mask", "0xFF") >> +load_duration = float(params.get("load_duration", 
"30")) >> +rest_duration = float(params.get("rest_duration", "10")) >> +drift_threshold = float(params.get("drift_threshold", "200")) >> +drift_threshold_after_rest = >> float(params.get("drift_threshold_after_rest", >> + "200")) >> + >> +guest_load_sessions = [] >> +host_load_sessions = [] >> + >> +# Remember the VM's previous CPU affinity >> +prev_cpu_mask = commands.getoutput("taskset -p %s" % vm.get_pid()) >> +prev_cpu_mask = prev_cpu_mask.split()[-1] >> +# Set the VM's CPU affinity >> +co
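[Editor's note] The pass/fail criterion described in steps 4-7 above reduces to comparing the elapsed host time against the elapsed guest time across the load and rest phases. A minimal sketch of that arithmetic in C (the helper name is invented for illustration; the actual test is Python code inside kvm_tests.py):

```c
#include <assert.h>
#include <math.h>

/* Drift as a percentage of elapsed host time: 0% means the guest
 * clock advanced exactly as much as the host clock did. */
static double drift_percent(double host_t0, double host_t1,
                            double guest_t0, double guest_t1)
{
    double host_delta = host_t1 - host_t0;
    double guest_delta = guest_t1 - guest_t0;
    return 100.0 * fabs(host_delta - guest_delta) / host_delta;
}
```

The test then fails if this value after load exceeds drift_threshold, or if the value measured after the rest period exceeds drift_threshold_after_rest.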
Re: [PATCH 02/11] Unregister ack notifier callback on PIT freeing.
On Tue, Jul 21, 2009 at 02:16:43PM -0300, Marcelo Tosatti wrote: > On Thu, Jul 16, 2009 at 05:03:30PM +0300, Gleb Natapov wrote: > > > > Signed-off-by: Gleb Natapov > > --- > > arch/x86/kvm/i8254.c |2 ++ > > 1 files changed, 2 insertions(+), 0 deletions(-) > > > > diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c > > index 137e548..472653c 100644 > > --- a/arch/x86/kvm/i8254.c > > +++ b/arch/x86/kvm/i8254.c > > @@ -672,6 +672,8 @@ void kvm_free_pit(struct kvm *kvm) > > if (kvm->arch.vpit) { > > kvm_unregister_irq_mask_notifier(kvm, 0, > >&kvm->arch.vpit->mask_notifier); > > + kvm_unregister_irq_ack_notifier(kvm, > > + &kvm->arch.vpit->pit_state.irq_ack_notifier); > > mutex_lock(&kvm->arch.vpit->pit_state.lock); > > timer = &kvm->arch.vpit->pit_state.pit_timer.timer; > > hrtimer_cancel(timer); > > Applied this one. > > I suppose you're reworking the lockless patchset to include the PIC irq > ack changes? (as discussed the pic_unlock trick with vcpu_kick is not > necessary anymore etc). > Fixed all of this already. Want them as separate patch series? I did them on my irq branch but they are good for master too. > Just please if you have any fixes, send them separately so backporting > to -stable is easier. > -- Gleb. -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [Autotest] [KVM-AUTOTEST PATCH 15/17] KVM test: add timedrift test to kvm_tests.cfg.sample
On Tue, Jul 21, 2009 at 06:41:09AM -0400, Michael Goldish wrote: > > - "Dor Laor" wrote: > > > On 07/20/2009 06:07 PM, Michael Goldish wrote: > > > Currently the test will only run on Windows. > > > It should be able to run on Linux just as well, but if I understand > > correctly, > > > testing time drift on Linux is less interesting. > > > > Linux is interesting too. The problem is more visible on windows > > since > > it uses 1000hz frequency when it plays multimedia. It makes timer irq > > injection harder. > > If I understand correctly, most Linuxes don't use RTC at all (please > correct me if I'm wrong). This means there's no point in testing > time drift on Linux, because even if there's any drift, it won't get > corrected by -rtc-td-hack. And it's pretty hard to get a drift on > RHEL-3.9 for example -- at least it was very hard for me. https://bugzilla.redhat.com/show_bug.cgi?id=507834 for example. Also we'd like to test different clocks. For example with RHEL5 you can choose, via a kernel boot parameter, the following clocks: clock= [BUGS=IA-32, HW] gettimeofday clocksource override. [Deprecated] Forces specified clocksource (if available) to be used when calculating gettimeofday(). If specified clocksource is not available, it defaults to PIT. Format: { pit | tsc | cyclone | pmtmr } From Documentation/kernel-parameters.txt file of the 2.6.18 kernel. Passing options to the guest kernel is also required for other things, and perhaps there should be a generic mechanism to do it. > > Does the test fail without the rtc-td-hack? > > The problem with the test is that it's hard to decide on the drift > thresholds for failure, because the more load you use, the larger the > drift you get. > -rtc-td-hack makes it harder to get a drift -- you need to add more load > in order to get the same drift. > However, in my experiments, when I got a drift, it was not corrected when > the load stopped. 
If I get 5 seconds of drift during load, and then I > stop the load and wait, the drift remains 5 seconds, which makes me think > I may be doing something wrong. I never got to see the cool fast rotating > clock either. > Another weird thing I noticed was that the drift was much larger when the > VM and load were NOT pinned to a single CPU. It could cause a leap from 5% > to 30%. (my office desktop has 2 CPUs.) > I used Vista with kvm-85 I think. I tried both video load (VLC) and dir /s. > Even if I did something wrong, I hope the test itself is OK, because its > behavior is completely configurable. > > > > > > > Also make some tiny cosmetic changes (spacing), and move the > > stress_boot test > > > before the shutdown test (shutdown should be last). > > > > > > Signed-off-by: Michael Goldish > > > --- > > > client/tests/kvm/kvm_tests.cfg.sample | 46 > > ++-- > > > 1 files changed, 37 insertions(+), 9 deletions(-) > > > > > > diff --git a/client/tests/kvm/kvm_tests.cfg.sample > > b/client/tests/kvm/kvm_tests.cfg.sample > > > index 1288952..2d75a66 100644 > > > --- a/client/tests/kvm/kvm_tests.cfg.sample > > > +++ b/client/tests/kvm/kvm_tests.cfg.sample > > > @@ -92,20 +92,33 @@ variants: > > > test_name = disktest > > > test_control_file = disktest.control > > > > > > -- linux_s3: install setup > > > +- linux_s3: install setup > > > type = linux_s3 > > > > > > -- shutdown: install setup > > > +- timedrift:install setup > > > +type = timedrift > > > +extra_params += " -rtc-td-hack" > > > +# Pin the VM and host load to CPU #0 > > > +cpu_mask = 0x1 > > > +# Set the load and rest durations > > > +load_duration = 20 > > > +rest_duration = 20 > > > +# Fail if the drift after load is higher than 50% > > > +drift_threshold = 50 > > > +# Fail if the drift after the rest period is higher than > > 10% > > > +drift_threshold_after_rest = 10 > > > + > > > +- stress_boot: install setup > > > +type = stress_boot > > > +max_vms = 5 > > > +alive_test_cmd = ps aux > > > + > > > +- 
shutdown: install setup > > > type = shutdown > > > kill_vm = yes > > > kill_vm_gracefully = no > > > > > > > > > -- stress_boot: > > > -type = stress_boot > > > -max_vms = 5 > > > -alive_test_cmd = ps aux > > > - > > > # NICs > > > variants: > > > - @rtl8139: > > > @@ -121,6 +134,7 @@ variants: > > > variants: > > > # Linux section > > > - @Linux: > > > +no timedrift > > > cmd_shutdown = shutdown -h now > > > cmd_reboot = shutdown -r now > > > ssh_status_test_command = echo $? > > > @@ -303,8 +317,6 @@ variants: > > > > > md5sum=
Re: qemu-kvm missing some msix capability check
On Tue, Jul 21, 2009 at 10:42:19PM +0530, Amit Shah wrote: > On (Tue) Jul 21 2009 [19:54:00], Michael S. Tsirkin wrote: > > On Fri, Jul 17, 2009 at 06:34:40PM +0530, Amit Shah wrote: > > > Hello, > > > > > > Using recent qemu-kvm userspace with a slightly older kernel module I > > > get this when using the virtio-net device: > > > > > > kvm_msix_add: kvm_get_irq_route_gsi failed: No space left on device > > > > > > ... and the guest doesn't use the net device. > > > > > > This goes away when using a newer kvm module. > > > > > > Amit > > > > Could you verify if the following helps please? > > What is this based on? Fails to apply on kvm/master. > > Amit 84a3c0818fe9d7a1e34c188d6182793f213a6a66
Re: qemu-kvm missing some msix capability check
On Tue, Jul 21, 2009 at 10:42:19PM +0530, Amit Shah wrote: > On (Tue) Jul 21 2009 [19:54:00], Michael S. Tsirkin wrote: > > On Fri, Jul 17, 2009 at 06:34:40PM +0530, Amit Shah wrote: > > > Hello, > > > > > > Using recent qemu-kvm userspace with a slightly older kernel module I > > > get this when using the virtio-net device: > > > > > > kvm_msix_add: kvm_get_irq_route_gsi failed: No space left on device > > > > > > ... and the guest doesn't use the net device. > > > > > > This goes away when using a newer kvm module. > > > > > > Amit > > > > Could you verify if the following helps please? > > What is this based on? Fails to apply on kvm/master. > > Amit Sorry, this is on top of 2 patches I just posted that fix other bugs in virtio.
Re: [PATCH] KVM: VMX: Fix locking imbalance on emulation failure
On Tue, Jul 21, 2009 at 10:43:07AM +0200, Jan Kiszka wrote: > We have to disable preemption and IRQs on every exit from > handle_invalid_guest_state, otherwise we generate at least a > preempt_disable imbalance. > > Signed-off-by: Jan Kiszka > --- > > arch/x86/kvm/vmx.c |2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c > index 3a75db3..7a8d464 100644 > --- a/arch/x86/kvm/vmx.c > +++ b/arch/x86/kvm/vmx.c > @@ -3335,7 +3335,7 @@ static void handle_invalid_guest_state(struct kvm_vcpu > *vcpu, > > if (err != EMULATE_DONE) { > kvm_report_emulation_failure(vcpu, "emulation failure"); > - return; > + break; > } > > if (signal_pending(current)) Applied, thanks.
VGA address mapping?
Hi, I am implementing a VGA device model. The model provides functions to read/write VGA memory space. Just for testing, I want to capture memory reads/writes to addresses 0xA0000->0xC0000 and forward them to my VGA model. I have used the following function to create physical RAM: int kvm_create ( kvm_context_t kvm, unsigned long phys_mem_bytes, void ** phys_mem ) The function comments say that this creates a new virtual machine, maps physical RAM to it, and creates a virtual CPU for it. Memory gets mapped for addresses 0->0xA0000 and 0xC0000->phys_mem_bytes. I was expecting MMIO read/write callbacks to capture transactions between 0xA0000->0xC0000, but I don't see that happening. My question is: how can I configure KVM to forward me reads/writes for the VGA address space? Thanks Abhishek
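[Editor's note] The key mechanism behind this question: KVM only exits to userspace with an MMIO request for guest-physical addresses that have no RAM slot backing them, which is why kvm_create() deliberately leaves the legacy VGA window unmapped. A sketch of the window test a device model would apply when dispatching such exits (constants follow the standard PC memory map; the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define VGA_WINDOW_START 0xA0000u   /* legacy VGA/EGA framebuffer */
#define VGA_WINDOW_END   0xC0000u   /* exclusive upper bound */

/* True if a guest-physical address lies in the unmapped VGA hole and
 * should therefore be routed to the device model's MMIO handlers. */
static int in_vga_window(uint64_t addr)
{
    return addr >= VGA_WINDOW_START && addr < VGA_WINDOW_END;
}
```

Accesses outside this hole hit mapped RAM directly inside the kernel and never reach userspace, which is consistent with the behavior described in the question.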
Re: [PATCH] qemu-kvm: reserve the low 24 gsi values
On Tue, Jul 21, 2009 at 08:21:15PM +0300, Gleb Natapov wrote: > On Tue, Jul 21, 2009 at 06:57:42PM +0300, Michael S. Tsirkin wrote: > > reserve gsi 0 to 23 so that they won't be allocated for msi > > > In the not so distant future we may want to support more than one > ioapic, so 23 will become 23*n, where n is the number of ioapics. Hmm, n is not limited here, is it? > I prefer > to fix it by moving msi injection to its own ioctl. We will have to figure out how to do this in a backwards-compatible way. > But for now maybe > we can scan used_gsi_bitmap from the end during allocation of gsi for > msi? Right, but I think we also want to fail allocations with number < 24 at least for now? > > Signed-off-by: Michael S. Tsirkin > > > > --- > > > > diff --git a/qemu-kvm.c b/qemu-kvm.c > > index c6c9fc6..f440b2d 100644 > > --- a/qemu-kvm.c > > +++ b/qemu-kvm.c > > @@ -1613,10 +1613,12 @@ int kvm_get_irq_route_gsi(kvm_context_t kvm) > > { > > int i, bit; > > uint32_t *buf = kvm->used_gsi_bitmap; > > + uint32_t mask = 0xff000000; > > > > /* Return the lowest unused GSI in the bitmap */ > > - for (i = 0; i < kvm->max_gsi / 32; i++) { > > - bit = ffs(~buf[i]); > > + for (i = 0; i < kvm->max_gsi / 32; i++) { > > + bit = ffs(~buf[i] & mask); > > + mask = 0xffffffff; > > if (!bit) > > continue; > > > > -- > Gleb.
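[Editor's note] The allocation logic under discussion is easy to model outside qemu-kvm. Below is a minimal, self-contained sketch (function name and table size invented for illustration) of how masking the first bitmap word reserves GSIs 0-23 for the ioapic while ffs() still picks the lowest free GSI for MSI:

```c
#include <assert.h>
#include <stdint.h>
#include <strings.h>            /* ffs() */

#define MAX_GSI 1024            /* illustrative; the real bound comes from the kernel */

/* Standalone model of the patched kvm_get_irq_route_gsi(): word 0 is
 * masked so bits 0..23 (the ioapic pins) are never handed out. */
static int find_free_gsi(const uint32_t *bitmap)
{
    uint32_t mask = 0xff000000; /* only GSIs 24..31 usable in word 0 */
    int i, bit;

    for (i = 0; i < MAX_GSI / 32; i++) {
        bit = ffs(~bitmap[i] & mask);
        mask = 0xffffffff;      /* all later words fully usable */
        if (!bit)
            continue;
        return 32 * i + bit - 1;
    }
    return -1;                  /* no free GSI left */
}
```

With an empty bitmap this returns 24, the first GSI above the ioapic range; Gleb's alternative of scanning from the end would instead hand out the highest free GSI first.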
[PATCH v2 0/6]
Marcelo, since you told me that you were not comfortable with the --enable-kvm change without Avi's ack, I'm dropping it and sending the rest of the series again. On top of that, I'm also including some more cleanup patches: one removes kvm_out{b,w,l}, in the same way I've already done with kvm_in{b,w,l}, and the other removes specific mmio functions. Thanks!
[PATCH v2 1/6] remove kvm_in* functions
We can use plain qemu's here, and save a couple of lines/complexity. I'm leaving outb for later, because the SMM thing makes it a little bit less trivial. Signed-off-by: Glauber Costa --- qemu-kvm.c | 25 - 1 files changed, 4 insertions(+), 21 deletions(-) diff --git a/qemu-kvm.c b/qemu-kvm.c index 3c892e6..0f5f14f 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -97,24 +97,6 @@ static int kvm_debug(void *opaque, void *data, } #endif -static int kvm_inb(void *opaque, uint16_t addr, uint8_t *data) -{ -*data = cpu_inb(0, addr); -return 0; -} - -static int kvm_inw(void *opaque, uint16_t addr, uint16_t *data) -{ -*data = cpu_inw(0, addr); -return 0; -} - -static int kvm_inl(void *opaque, uint16_t addr, uint32_t *data) -{ -*data = cpu_inl(0, addr); -return 0; -} - #define PM_IO_BASE 0xb000 static int kvm_outb(void *opaque, uint16_t addr, uint8_t data) @@ -853,15 +835,16 @@ static int handle_io(kvm_vcpu_context_t vcpu) for (i = 0; i < run->io.count; ++i) { switch (run->io.direction) { case KVM_EXIT_IO_IN: + r = 0; switch (run->io.size) { case 1: - r = kvm_inb(kvm->opaque, addr, p); + *(uint8_t *)p = cpu_inb(kvm->opaque, addr); break; case 2: - r = kvm_inw(kvm->opaque, addr, p); + *(uint16_t *)p = cpu_inw(kvm->opaque, addr); break; case 4: - r = kvm_inl(kvm->opaque, addr, p); + *(uint32_t *)p = cpu_inl(kvm->opaque, addr); break; default: fprintf(stderr, "bad I/O size %d\n", run->io.size); -- 1.6.2.2 -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH v2 2/6] reuse env stop and stopped states
qemu CPUState already provides "stop" and "stopped" states. And they mean exactly that. There is no need for us to provide our own. Signed-off-by: Glauber Costa --- cpu-defs.h |2 -- qemu-kvm.c | 30 -- vl.c |2 +- 3 files changed, 13 insertions(+), 21 deletions(-) diff --git a/cpu-defs.h b/cpu-defs.h index 7570096..fce366f 100644 --- a/cpu-defs.h +++ b/cpu-defs.h @@ -142,8 +142,6 @@ struct qemu_work_item; struct KVMCPUState { pthread_t thread; int signalled; -int stop; -int stopped; int created; void *vcpu_ctx; struct qemu_work_item *queued_work_first, *queued_work_last; diff --git a/qemu-kvm.c b/qemu-kvm.c index 0f5f14f..8eeace4 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -91,7 +91,7 @@ static int kvm_debug(void *opaque, void *data, if (handle) { kvm_debug_cpu_requested = env; - env->kvm_cpu_state.stopped = 1; + env->stopped = 1; } return handle; } @@ -977,7 +977,7 @@ int handle_halt(kvm_vcpu_context_t vcpu) int handle_shutdown(kvm_context_t kvm, CPUState *env) { /* stop the current vcpu from going back to guest mode */ -env->kvm_cpu_state.stopped = 1; +env->stopped = 1; qemu_system_reset_request(); return 1; @@ -1815,7 +1815,7 @@ int kvm_cpu_exec(CPUState *env) static int is_cpu_stopped(CPUState *env) { -return !vm_running || env->kvm_cpu_state.stopped; +return !vm_running || env->stopped; } static void flush_queued_work(CPUState *env) @@ -1861,9 +1861,9 @@ static void kvm_main_loop_wait(CPUState *env, int timeout) cpu_single_env = env; flush_queued_work(env); -if (env->kvm_cpu_state.stop) { - env->kvm_cpu_state.stop = 0; - env->kvm_cpu_state.stopped = 1; +if (env->stop) { + env->stop = 0; + env->stopped = 1; pthread_cond_signal(&qemu_pause_cond); } @@ -1875,7 +1875,7 @@ static int all_threads_paused(void) CPUState *penv = first_cpu; while (penv) { -if (penv->kvm_cpu_state.stop) +if (penv->stop) return 0; penv = (CPUState *)penv->next_cpu; } @@ -1889,11 +1889,11 @@ static void pause_all_threads(void) while (penv) { if (penv != cpu_single_env) { 
-penv->kvm_cpu_state.stop = 1; +penv->stop = 1; pthread_kill(penv->kvm_cpu_state.thread, SIG_IPI); } else { -penv->kvm_cpu_state.stop = 0; -penv->kvm_cpu_state.stopped = 1; +penv->stop = 0; +penv->stopped = 1; cpu_exit(penv); } penv = (CPUState *)penv->next_cpu; @@ -1910,8 +1910,8 @@ static void resume_all_threads(void) assert(!cpu_single_env); while (penv) { -penv->kvm_cpu_state.stop = 0; -penv->kvm_cpu_state.stopped = 0; +penv->stop = 0; +penv->stopped = 0; pthread_kill(penv->kvm_cpu_state.thread, SIG_IPI); penv = (CPUState *)penv->next_cpu; } @@ -2676,12 +2676,6 @@ int kvm_log_stop(target_phys_addr_t phys_addr, target_phys_addr_t len) return 0; } -void qemu_kvm_cpu_stop(CPUState *env) -{ -if (kvm_enabled()) -env->kvm_cpu_state.stopped = 1; -} - int kvm_set_boot_cpu_id(uint32_t id) { return kvm_set_boot_vcpu_id(kvm_context, id); diff --git a/vl.c b/vl.c index b3df596..6ef7690 100644 --- a/vl.c +++ b/vl.c @@ -3553,7 +3553,7 @@ void qemu_system_reset_request(void) reset_requested = 1; } if (cpu_single_env) { -qemu_kvm_cpu_stop(cpu_single_env); +cpu_single_env->stopped = 1; } qemu_notify_event(); } -- 1.6.2.2 -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH v2 4/6] remove created from kvm_state
Again, CPUState has it, and it means exactly that. Signed-off-by: Glauber Costa --- cpu-defs.h |1 - qemu-kvm.c | 10 +- 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/cpu-defs.h b/cpu-defs.h index fce366f..ce9f96a 100644 --- a/cpu-defs.h +++ b/cpu-defs.h @@ -142,7 +142,6 @@ struct qemu_work_item; struct KVMCPUState { pthread_t thread; int signalled; -int created; void *vcpu_ctx; struct qemu_work_item *queued_work_first, *queued_work_last; }; diff --git a/qemu-kvm.c b/qemu-kvm.c index a8298e4..cef522d 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -1727,12 +1727,12 @@ void kvm_update_interrupt_request(CPUState *env) int signal = 0; if (env) { -if (!current_env || !current_env->kvm_cpu_state.created) +if (!current_env || !current_env->created) signal = 1; /* * Testing for created here is really redundant */ -if (current_env && current_env->kvm_cpu_state.created && +if (current_env && current_env->created && env != current_env && !env->kvm_cpu_state.signalled) signal = 1; @@ -2012,7 +2012,7 @@ static void *ap_main_loop(void *_env) /* signal VCPU creation */ pthread_mutex_lock(&qemu_mutex); -current_env->kvm_cpu_state.created = 1; +current_env->created = 1; pthread_cond_signal(&qemu_vcpu_cond); /* and wait for machine initialization */ @@ -2028,13 +2028,13 @@ void kvm_init_vcpu(CPUState *env) { pthread_create(&env->kvm_cpu_state.thread, NULL, ap_main_loop, env); -while (env->kvm_cpu_state.created == 0) +while (env->created == 0) qemu_cond_wait(&qemu_vcpu_cond); } int kvm_vcpu_inited(CPUState *env) { -return env->kvm_cpu_state.created; +return env->created; } #ifdef TARGET_I386 -- 1.6.2.2 -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH v2 5/6] remove kvm_specific kvm_out* functions
As example of what was already done with inb. This is a little bit more tricky, because of SMM, but those bits are handled directly in apic anyway. Signed-off-by: Glauber Costa --- qemu-kvm.c | 60 +++- 1 files changed, 3 insertions(+), 57 deletions(-) diff --git a/qemu-kvm.c b/qemu-kvm.c index cef522d..0724c28 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -95,55 +95,6 @@ static int kvm_debug(void *opaque, void *data, } #endif -#define PM_IO_BASE 0xb000 - -static int kvm_outb(void *opaque, uint16_t addr, uint8_t data) -{ -if (addr == 0xb2) { - switch (data) { - case 0: { - cpu_outb(0, 0xb3, 0); - break; - } - case 0xf0: { - unsigned x; - - /* enable acpi */ - x = cpu_inw(0, PM_IO_BASE + 4); - x &= ~1; - cpu_outw(0, PM_IO_BASE + 4, x); - break; - } - case 0xf1: { - unsigned x; - - /* enable acpi */ - x = cpu_inw(0, PM_IO_BASE + 4); - x |= 1; - cpu_outw(0, PM_IO_BASE + 4, x); - break; - } - default: - break; - } - return 0; -} -cpu_outb(0, addr, data); -return 0; -} - -static int kvm_outw(void *opaque, uint16_t addr, uint16_t data) -{ -cpu_outw(0, addr, data); -return 0; -} - -static int kvm_outl(void *opaque, uint16_t addr, uint32_t data) -{ -cpu_outl(0, addr, data); -return 0; -} - int kvm_mmio_read(void *opaque, uint64_t addr, uint8_t *data, int len) { cpu_physical_memory_rw(addr, data, len, 0); @@ -825,14 +776,12 @@ static int handle_io(kvm_vcpu_context_t vcpu) struct kvm_run *run = vcpu->run; kvm_context_t kvm = vcpu->kvm; uint16_t addr = run->io.port; - int r; int i; void *p = (void *)run + run->io.data_offset; for (i = 0; i < run->io.count; ++i) { switch (run->io.direction) { case KVM_EXIT_IO_IN: - r = 0; switch (run->io.size) { case 1: *(uint8_t *)p = cpu_inb(kvm->opaque, addr); @@ -851,16 +800,13 @@ static int handle_io(kvm_vcpu_context_t vcpu) case KVM_EXIT_IO_OUT: switch (run->io.size) { case 1: - r = kvm_outb(kvm->opaque, addr, -*(uint8_t *)p); +cpu_outb(kvm->opaque, addr, *(uint8_t *)p); break; case 2: - r = kvm_outw(kvm->opaque, addr, -*(uint16_t *)p); + 
cpu_outw(kvm->opaque, addr, *(uint16_t *)p); break; case 4: - r = kvm_outl(kvm->opaque, addr, -*(uint32_t *)p); + cpu_outl(kvm->opaque, addr, *(uint32_t *)p); break; default: fprintf(stderr, "bad I/O size %d\n", run->io.size); -- 1.6.2.2 -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH v2 6/6] remove kvm_mmio_read and kvm_mmio_write
all they did was to call a qemu function. Call this function instead. Signed-off-by: Glauber Costa --- qemu-kvm-x86.c |7 +-- qemu-kvm.c | 34 -- 2 files changed, 9 insertions(+), 32 deletions(-) diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c index 350f272..741ae0a 100644 --- a/qemu-kvm-x86.c +++ b/qemu-kvm-x86.c @@ -344,7 +344,6 @@ void kvm_show_code(kvm_vcpu_context_t vcpu) unsigned char code; char code_str[SHOW_CODE_LEN * 3 + 1]; unsigned long rip; - kvm_context_t kvm = vcpu->kvm; r = ioctl(fd, KVM_GET_SREGS, &sregs); if (r == -1) { @@ -364,11 +363,7 @@ void kvm_show_code(kvm_vcpu_context_t vcpu) for (n = -back_offset; n < SHOW_CODE_LEN-back_offset; ++n) { if (n == 0) strcat(code_str, " -->"); - r = kvm_mmio_read(kvm->opaque, rip + n, &code, 1); - if (r < 0) { - strcat(code_str, " xx"); - continue; - } + cpu_physical_memory_rw(rip + n, &code, 1, 0); sprintf(code_str + strlen(code_str), " %02x", code); } fprintf(stderr, "code:%s\n", code_str); diff --git a/qemu-kvm.c b/qemu-kvm.c index 0724c28..9b1c506 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -95,18 +95,6 @@ static int kvm_debug(void *opaque, void *data, } #endif -int kvm_mmio_read(void *opaque, uint64_t addr, uint8_t *data, int len) -{ - cpu_physical_memory_rw(addr, data, len, 0); - return 0; -} - -int kvm_mmio_write(void *opaque, uint64_t addr, uint8_t *data, int len) -{ - cpu_physical_memory_rw(addr, data, len, 1); - return 0; -} - static int handle_unhandled(kvm_context_t kvm, kvm_vcpu_context_t vcpu, uint64_t reason) { @@ -888,23 +876,17 @@ int kvm_set_mpstate(kvm_vcpu_context_t vcpu, struct kvm_mp_state *mp_state) } #endif -static int handle_mmio(kvm_vcpu_context_t vcpu) +static void handle_mmio(kvm_vcpu_context_t vcpu) { unsigned long addr = vcpu->run->mmio.phys_addr; - kvm_context_t kvm = vcpu->kvm; struct kvm_run *kvm_run = vcpu->run; void *data = kvm_run->mmio.data; /* hack: Red Hat 7.1 generates these weird accesses. 
*/ if ((addr > 0xa0000-4 && addr <= 0xa0000) && kvm_run->mmio.len == 3) - return 0; + return; - if (kvm_run->mmio.is_write) - return kvm_mmio_write(kvm->opaque, addr, data, - kvm_run->mmio.len); - else - return kvm_mmio_read(kvm->opaque, addr, data, - kvm_run->mmio.len); +cpu_physical_memory_rw(addr, data, kvm_run->mmio.len, kvm_run->mmio.is_write); } int handle_io_window(kvm_context_t kvm) @@ -991,10 +973,9 @@ again: struct kvm_coalesced_mmio_ring *ring = (void *)run + kvm->coalesced_mmio * PAGE_SIZE; while (ring->first != ring->last) { - kvm_mmio_write(kvm->opaque, -ring->coalesced_mmio[ring->first].phys_addr, - &ring->coalesced_mmio[ring->first].data[0], -ring->coalesced_mmio[ring->first].len); +cpu_physical_memory_rw(ring->coalesced_mmio[ring->first].phys_addr, + &ring->coalesced_mmio[ring->first].data[0], + ring->coalesced_mmio[ring->first].len, 1); smp_wmb(); ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX; @@ -1033,7 +1014,8 @@ again: r = handle_debug(vcpu, env); break; case KVM_EXIT_MMIO: - r = handle_mmio(vcpu); +r = 0; + handle_mmio(vcpu); break; case KVM_EXIT_HLT: r = handle_halt(vcpu); -- 1.6.2.2
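[Editor's note] The coalesced-mmio loop in the hunk above is a plain single-producer/single-consumer ring: the kernel advances last as it queues writes, and userspace advances first as it replays them. A toy model of that drain (struct and sizes invented for illustration):

```c
#include <assert.h>

#define RING_MAX 8  /* stands in for KVM_COALESCED_MMIO_MAX */

struct toy_ring {
    int first, last;        /* consumer / producer indices */
    int data[RING_MAX];
};

/* Replay and consume every queued entry, wrapping modulo RING_MAX,
 * exactly like the while (ring->first != ring->last) loop above. */
static int drain(struct toy_ring *r, int *out)
{
    int n = 0;
    while (r->first != r->last) {
        out[n++] = r->data[r->first];
        r->first = (r->first + 1) % RING_MAX;
    }
    return n;
}
```

Because only the consumer writes first and only the producer writes last, the indices need no lock; the real code only needs the smp_wmb() to order the data read against publishing the new first.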
[PATCH v2 3/6] remove kvm_abi variable
We're not using this for anything. Signed-off-by: Glauber Costa --- qemu-kvm.c |3 --- 1 files changed, 0 insertions(+), 3 deletions(-) diff --git a/qemu-kvm.c b/qemu-kvm.c index 8eeace4..a8298e4 100644 --- a/qemu-kvm.c +++ b/qemu-kvm.c @@ -43,7 +43,6 @@ int kvm_pit = 1; int kvm_pit_reinject = 1; int kvm_nested = 0; - static KVMState *kvm_state; kvm_context_t kvm_context; @@ -79,7 +78,6 @@ static LIST_HEAD(, ioperm_data) ioperm_head; #define ALIGN(x, y) (((x)+(y)-1) & ~((y)-1)) -int kvm_abi = EXPECTED_KVM_API_VERSION; int kvm_page_size; #ifdef KVM_CAP_SET_GUEST_DEBUG @@ -429,7 +427,6 @@ int kvm_init(int smp_cpus) fprintf(stderr, "kvm userspace version too old\n"); goto out_close; } - kvm_abi = r; kvm_page_size = getpagesize(); kvm_state = qemu_mallocz(sizeof(*kvm_state)); kvm_context = &kvm_state->kvm_context; -- 1.6.2.2
RE: VGA address mapping?
I would also like to mention that I am not using Qemu and am building some basic IO models around KVM (only using libkvm.h). -Abhishek From: Saksena, Abhishek Sent: Tuesday, July 21, 2009 11:13 AM To: kvm@vger.kernel.org Subject: VGA address mapping? Hi, I am implementing a VGA device model. The model provides functions to read/write VGA memory space. Just for testing, I want to capture memory reads/writes to addresses 0xA0000->0xC0000 and forward them to my VGA model. I have used the following function to create physical RAM: int kvm_create ( kvm_context_t kvm, unsigned long phys_mem_bytes, void ** phys_mem ) The function comments say that this creates a new virtual machine, maps physical RAM to it, and creates a virtual CPU for it. Memory gets mapped for addresses 0->0xA0000 and 0xC0000->phys_mem_bytes. I was expecting MMIO read/write callbacks to capture transactions between 0xA0000->0xC0000, but I don't see that happening. My question is: how can I configure KVM to forward me reads/writes for the VGA address space? Thanks Abhishek
Re: [PATCH] kvm: Drop obsolete cpu_get/put in make_all_cpus_request
Marcelo Tosatti wrote:
> On Tue, Jul 21, 2009 at 10:24:08AM +0200, Jan Kiszka wrote:
>> Marcelo Tosatti wrote:
>>> Jan,
>>>
>>> This was suggested but we thought it might be safer to keep the
>>> get_cpu/put_cpu pair in case -rt kernels require it (which might be
>>> bullshit, but nobody verified).
>>
>> -rt stumbles over both patterns (that's why I stumbled over it in the
>> first place: get_cpu disables preemption, but spin_lock is a sleeping
>> lock under -rt) and actually requires requests_lock to become
>> raw_spinlock_t. Reordering get_cpu and spin_lock would be another
>> option, but not really a gain in either scenario.
>
> I see.
>
>> So unless there is a way to make the whole critical section preemptible
>> (thus migration-agnostic), I think we can micro-optimize it like this.
>
> Can't you switch requests_lock to be raw_spinlock_t then? (or whatever
> is necessary to make it -rt compatible).

raw_spinlock_t under -rt is not comparable to raw_spinlock_t in mainline. So I'm currently carrying a local patch with

#ifdef CONFIG_PREEMPT_RT
    raw_spinlock_t some_lock;
#else
    spinlock_t some_lock;
#endif

for all locks that need it (there are three ATM).

That said, I suspect there are more problems with kvm over -rt right now. I'm seeing significant latency peaks on the host. Still investigating, though.

However, I don't think we should bother too much about -rt compliance in mainline unless the diff is trivial and basically irrelevant for the common non-rt cases.

Jan
Re: [PATCH] kvm: Drop obsolete cpu_get/put in make_all_cpus_request
On Wed, Jul 22, 2009 at 01:29:24AM +0200, Jan Kiszka wrote:
> Marcelo Tosatti wrote:
> > On Tue, Jul 21, 2009 at 10:24:08AM +0200, Jan Kiszka wrote:
> >> Marcelo Tosatti wrote:
> >>> Jan,
> >>>
> >>> This was suggested but we thought it might be safer to keep the
> >>> get_cpu/put_cpu pair in case -rt kernels require it (which might be
> >>> bullshit, but nobody verified).
> >>
> >> -rt stumbles over both patterns (that's why I stumbled over it in the
> >> first place: get_cpu disables preemption, but spin_lock is a sleeping
> >> lock under -rt) and actually requires requests_lock to become
> >> raw_spinlock_t. Reordering get_cpu and spin_lock would be another
> >> option, but not really a gain in either scenario.
> >
> > I see.
> >
> >> So unless there is a way to make the whole critical section preemptible
> >> (thus migration-agnostic), I think we can micro-optimize it like this.
> >
> > Can't you switch requests_lock to be raw_spinlock_t then? (or whatever
> > is necessary to make it -rt compatible).
>
> raw_spinlock_t under -rt is not comparable to raw_spinlock_t in
> mainline. So I'm currently carrying a local patch with
>
> #ifdef CONFIG_PREEMPT_RT
>     raw_spinlock_t some_lock;
> #else
>     spinlock_t some_lock;
> #endif
>
> for all locks that need it (there are three ATM).
>
> That said, I suspect there are more problems with kvm over -rt
> right now. I'm seeing significant latency peaks on the host. Still
> investigating, though.
>
> However, I don't think we should bother too much about -rt compliance in
> mainline unless the diff is trivial and basically irrelevant for the
> common non-rt cases.
>
> Jan

OK then, applied.
[PATCH] fix serious regression
Today I found a very catastrophic regression: I cannot run my mission-critical servers running RHL7.1 anymore. This is a total disaster. Fortunately, I was able to isolate the commit that caused it:

commit bb598da496c040d42dde564bd8ace181be52293e
Author: Glauber Costa
Date:   Mon Jul 6 16:12:52 2009 -0400

This guy is certainly stupid, and deserves punishment. It means I'll be writing code using emacs for the next week.

Marcelo, please apply.

Signed-off-by: Glauber Costa
---
 qemu-kvm.c | 2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/qemu-kvm.c b/qemu-kvm.c
index e200dea..393c5cc 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -1003,8 +1003,6 @@ int pre_kvm_run(kvm_context_t kvm, CPUState *env)
 {
     kvm_arch_pre_kvm_run(kvm->opaque, env);
-    if (env->exit_request)
-        return 1;
     pthread_mutex_unlock(&qemu_mutex);
     return 0;
 }
--
1.6.2.2
USB passthrough does not work
I am using kvm-88 and trying to pass through a USB storage device (USB memory stick) via

-usb -usbdevice host::

In Vista x64 the device appears but has error code 10 (this device cannot start).

Any ideas?

Andreas
Re: USB passthrough does not work
I don't know if this might be affecting you, but KVM does not support USB 2.0, and more and more devices these days are USB 2.0 only. It's at least worth checking out. In any case, copying files over USB 1.1 is going to be terribly painful.

--Iggy

On Tuesday 21 July 2009 19:44:33 Andreas Kinzler wrote:
> I am using kvm-88 and trying to passthrough a USB
> storage device (USB memory stick) via
>
> -usb -usbdevice host::
>
> In Vista x64 the device appears but has error code
> 10 (this device cannot start).
>
> Any ideas?
>
> Andreas
Re: [Autotest] [KVM_AUTOTEST] add kvm hugepage variant
The patch looks pretty clean to me. I was running a small hugetlbfs script doing the same, but it's good now that the script is being incorporated into the test.

On Tue, Jul 21, 2009 at 9:34 PM, Lukáš Doktor wrote:
> Well, thank you for the notifications, I'll keep them in mind.
>
> Also the problem with mempath vs. mem-path is solved. It was just a
> misspelling in one version of KVM.
>
> * fixed patch attached
>
> Dne 20.7.2009 14:58, Lucas Meneghel Rodrigues napsal(a):
>> On Fri, 2009-07-10 at 12:01 +0200, Lukáš Doktor wrote:
>>> After discussion I split the patches.
>>
>> Hi Lukáš, sorry for the delay answering your patch. Looks good to me in
>> general; I have some remarks to make:
>>
>> 1) When posting patches to the autotest kvm tests, please cross-post the
>> autotest mailing list (autot...@test.kernel.org) and the KVM list.
>>
>> 2) About scripts to prepare the environment to perform tests - we've had
>> some discussion about including shell scripts in autotest. Bottom line,
>> autotest has a policy of not including non-Python code when possible
>> [1]. So, would you mind re-creating your hugepage setup code in Python
>> and re-sending it?
>>
>> Thanks for your contribution; looking forward to getting it integrated
>> into our tests.
>>
>> [1] Unless it is not practical for testing purposes - writing tests
>> in C is just fine, for example.
>>
>>> This patch adds the kvm_hugepage variant. It prepares the host system
>>> and starts the vm with the -mem-path option. It does not clean up after
>>> itself, because it's impossible to unmount and free hugepages before
>>> all guests are destroyed.
>>>
>>> I need to ask you what to do about the change of the qemu parameter.
>>> Newer versions use -mempath instead of -mem-path. This is impossible
>>> to fix using the current config file. I can see 2 solutions:
>>> 1) direct change in kvm_vm.py (parse output and try the other param)
>>> 2) detect qemu capabilities outside and create an additional layer
>>> (better for future occurrences)
>>>
>>> Dne 9.7.2009 11:24, Lukáš Doktor napsal(a):
 This patch adds the kvm_hugepage variant. It prepares the host system and
 starts the vm with the -mem-path option. It does not clean up after itself,
 because it's impossible to unmount and free hugepages before all guests
 are destroyed. There is also the added autotest.libhugetlbfs test.
 I need to ask you what to do about the change of the qemu parameter.
 Newer versions use -mempath instead of -mem-path. This is impossible to
 fix using the current config file. I can see 2 solutions:
 1) direct change in kvm_vm.py (parse output and try the other param)
 2) detect qemu capabilities outside and create an additional layer
 (better for future occurrences)
 Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5

___
Autotest mailing list
autot...@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

--
Sudhir Kumar