Ping.
Have I committed a bug-reporting sin in the mail below or is everyone
simply too busy to look at this kvm-related crash?
On 07/09/12 11:57, Chris Clayton wrote:
Hi,
When I run WinXP SP3 through qemu-kvm-1.1.0 on linux kernel 3.5.0-rc6, I
get a segmentation fault within 3 or 4 minutes m
On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
> Ping.
>
> Have I committed a bug-reporting sin in the mail below or is
> everyone simply too busy to look at this kvm-related crash?
>
Since you have good and bad points, can you bisect the problem?
> On 07/09/12 11:57, Chris Clayto
On 07/11/12 08:12, Gleb Natapov wrote:
On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
Ping.
Have I committed a bug-reporting sin in the mail below or is
everyone simply too busy to look at this kvm-related crash?
Since you have good and bad points can you bisect the problem?
On Wed, Jul 11, 2012 at 08:18:17AM +0100, Chris Clayton wrote:
> On 07/11/12 08:12, Gleb Natapov wrote:
> >On Wed, Jul 11, 2012 at 08:09:42AM +0100, Chris Clayton wrote:
> >>Ping.
> >>
> >>Have I committed a bug-reporting sin in the mail below or is
> >>everyone simply too busy to look at this kvm-
DMY guys.
I've sorted it.
We're happy now.
> From: verucasal...@hotmail.co.uk
> To: kvm@vger.kernel.org
> Subject: Issue with mouse-capture
> Date: Mon, 9 Jul 2012 08:16:10 +
>
>
>
> I realise you guys are very busy, but I'm about to go into the Qemu-k
On 07/11/2012 03:56 AM, Alexander Graf wrote:
> Hi Avi,
>
> This is my current patch queue for ppc. Please pull.
>
> It contains the following changes:
>
> * VERY IMPORTANT (please forward to -stable):
> Fix H_CEDE with PR KVM and newer guest kernels
If it's important please separate it a
On 07/11/2012 03:56 AM, Alexander Graf wrote:
> Hi Avi,
>
> This is my current patch queue for ppc. Please pull.
>
> * Book3S HV: Fix locks (should be in your tree already?)
>
Indeed it's in 3.5 already. The way to check is to look for it in
auto-next, which includes master, upstream, and ne
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
> Signed-off-by: Raghavendra K T
>
> Noting the pause-loop-exited vcpu helps in filtering the right candidate to yield to.
> Yielding to the same vcpu may result in more wastage of cpu.
>
>
> struct kvm_lpage_info {
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
>
> For the cpu threads in the host that are actually active (in this case
> 1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
> is for a no-IO workload, so that's just incredible to see so much cpu
> wasted. I feel that
On 07/09/2012 10:55 AM, Christian Borntraeger wrote:
> On 09/07/12 08:20, Raghavendra K T wrote:
>> Currently the Pause Loop Exit (PLE) handler is doing a directed yield to a
>> random VCPU on PL exit. Though we already have filtering while choosing
>> the candidate to yield_to, we can do better.
>>
>>
On 07/03/2012 10:21 PM, Alex Williamson wrote:
> Here's the latest iteration of adding an interface to assert and
> de-assert level interrupts from external drivers like vfio. These
> apply on top of the previous argument cleanup, documentation, and
> sanitization patches for irqfd. It would be g
On 07/09/2012 07:53 PM, Alex Williamson wrote:
> The kernel no longer allows us to pass NULL for the hard handler
> without also specifying IRQF_ONESHOT. IRQF_ONESHOT imposes latency
> in the exit path that we don't need for MSI interrupts. Long term
> we'd like to inject these interrupts from th
On 06/19/2012 06:42 PM, Chegu Vinod wrote:
Hello,
Wanted to share some preliminary data from live migration experiments on a setup
that is perhaps one of the larger ones.
We used Juan's "huge_memory" patches (without the separate migration thread) and
measured the total migration time and the t
On 06/19/2012 08:22 PM, Michael Roth wrote:
On Tue, Jun 19, 2012 at 11:34:42PM +0900, Takuya Yoshikawa wrote:
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori wrote:
I'm not at all convinced that postcopy is a good idea. There needs to be a clear
expression of what the value proposition is that'
On 07/06/2012 07:22 PM, Jan Kiszka wrote:
> Replace the home-brewed qdev property for PCI host addresses with the
> new upstream version.
>
Thanks, applied.
--
error compiling committee.c: too many arguments to function
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the
On 11/07/12 11:06, Avi Kivity wrote:
[...]
>> Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
>> spinlocks, though.
>
> Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristics:
- loop for a given amount of loops
- check i
On 2012-07-11 11:53, Avi Kivity wrote:
> On 07/03/2012 10:21 PM, Alex Williamson wrote:
>> Here's the latest iteration of adding an interface to assert and
>> de-assert level interrupts from external drivers like vfio. These
>> apply on top of the previous argument cleanup, documentation, and
>> s
This is v2 of the ACPI memory hotplug prototype for x86_64 target.
Changes v1->v2
- memory map is automatically calculated for hotplug dimms. Dimms are added from
top-of-memory skipping the pci hole at [PCI_HOLE_START, 4G).
- Renamed from "-memslot" to "-dimm". Commands changed to "dimm_add",
"d
This allows extracting the beginning, end and name of a Device object.
Signed-off-by: Vasilis Liaskovitis
---
tools/acpi_extract.py | 28
1 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/tools/acpi_extract.py b/tools/acpi_extract.py
index 167a322..cb
Extend the DSDT to include methods for handling memory hot-add and hot-remove
notifications and memory device status requests. These functions are called
from the memory device SSDT methods.
Signed-off-by: Vasilis Liaskovitis
---
src/acpi-dsdt.dsl | 70 +
A 32-byte register is used to present up to 256 hotplug-able memory devices
to BIOS and OSPM. Hot-add and hot-remove functions trigger an ACPI hotplug
event through these. Only reads are allowed from these registers.
An ACPI hot-remove event, however, needs to wait for OSPM to eject the device.
We use a
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove requests.
Guest responses for memory hotplug command on a per-dimm basis can be detected
with the new hmp command "info memhp" or
Add support for _OST method. _OST method will write into the correct I/O byte to
signal success / failure of hot-add or hot-remove to qemu.
Signed-off-by: Vasilis Liaskovitis
---
src/acpi-dsdt.dsl | 46 ++
src/ssdt-mem.dsl |4
2 files chan
This allows failed hot operations to be retried at any time. This only
works for guests that use _OST notification. Other guests cannot retry failed
hot operations on same devices until after reboot.
Signed-off-by: Vasilis Liaskovitis
---
hw/acpi_piix4.c | 20 +++-
hw/dimm.c
Implement batch dimm creation command line options. These can be useful to
avoid bloating the command line with a large number of dimms.
syntax: -dimms pfx=poolid,size=sz,num=n
Will create n dimms with ids poolid0, ..., poolidn-1. Each dimm has a
size of sz.
Implement -dimmpop option
Each hotplug-able memory slot is a SysBusDevice. A hot-add operation for a
particular dimm creates a new MemoryRegion of the given physical address
offset, size and node proximity, and attaches it to main system memory as a
sub_region. A hot-remove operation detaches and frees the MemoryRegion from
In order to hotplug memory between RamSize and BUILD_PCIMEM_START, the pci
window needs to start at BUILD_PCIMEM_START (0xe000).
Otherwise, the guest cannot online new dimms at those ranges due to pci_root
window conflicts. (workaround for linux guest is booting with pci=nocrs)
Signed-off-by:
Live migration works after memory hot-add events, as long as the
qemu command line "-dimm" arguments are changed on the destination host
to specify "populated=on" for the dimms that have been hot-added.
If a command-line change has not occurred, the destination host does not yet
have the correspond
This implements batch monitor operations for hot-add and hot-remove. These are
probably better suited for a higher-level management layer, but are useful for
testing. Let me know if there is interest for such commands upstream.
syntax: mem_increase poolid num
will hotplug num dimms from pool pooli
In case of hot-remove or hot-add failure, the dimm bitmaps in qemu and Seabios
are inconsistent with the true state of the DIMM devices. The "populated" field
of the DimmState reflects the true state of the device. This inconsistency means
that a failed operation cannot be retried.
This patch updat
This reverts bitmap state in the case of a failed hot operation, in order to
allow retry of failed hot operations
Signed-off-by: Vasilis Liaskovitis
---
src/acpi-dsdt.dsl |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/src/acpi-dsdt.dsl b/src/acpi-dsdt.dsl
index 1c253
Returns total memory of guest in bytes, including hotplugged memory.
Signed-off-by: Vasilis Liaskovitis
---
hmp-commands.hx |2 ++
hmp.c|7 +++
hmp.h|1 +
hw/dimm.c| 15 +++
monitor.c|7 +++
qapi-schema.json | 12 ++
This allows qemu to receive notifications from the guest OS on success or
failure of a memory hotplug request. The guest OS needs to implement the _OST
functionality for this to work (linux-next: http://lkml.org/lkml/2012/6/25/321)
Also add new _OST registers in docs/specs/acpi_hotplug.txt
Signed-
Hot-add hmp syntax: dimm_add dimmid
Hot-remove hmp syntax: dimm_del dimmid
Respective qmp commands are "dimm-add", "dimm-del".
Signed-off-by: Vasilis Liaskovitis
---
hmp-commands.hx | 32
monitor.c | 11 +++
monitor.h |3 +++
qmp-comm
Syntax: "-dimm id=name,size=sz,node=pxm,populated=on|off"
The starting physical address for all dimms is calculated automatically from top
of memory, skipping the pci hole at [PCI_HOLE_START, 4G).
"populated=on" means the dimm is populated at machine startup. Default is off.
"node" is defining nu
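
Putting the pieces of the proposed syntax together, an invocation might look like this (illustrative values only — the ids, sizes, and node numbers are made up for the example):

```shell
# Illustrative example of the proposed -dimm syntax; values are made up.
# dimm0 is populated at machine startup on node 0; dimm1 is left empty
# on node 1 for a later dimm_add.
qemu-system-x86_64 -m 1024 \
    -dimm id=dimm0,size=512M,node=0,populated=on \
    -dimm id=dimm1,size=512M,node=1,populated=off
```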
The numa_fw_cfg paravirt interface is extended to include SRAT information for
all hotplug-able dimms. There are 3 words for each hotplug-able memory slot,
denoting start address, size and node proximity. The new info is appended after
existing numa info, so that the fw_cfg layout does not break.
Dimm physical address offsets are calculated automatically and memory map is
adjusted accordingly. If a DIMM can fit before the PCI_HOLE_START (currently
0xe000), it will be added normally, otherwise its physical address will be
above 4GB.
Signed-off-by: Vasilis Liaskovitis
---
hw/pc.c
The memory device generation is guided by qemu paravirt info. Seabios
first uses the info to setup SRAT entries for the hotplug-able memory slots.
Afterwards, build_memssdt uses the created SRAT entries to generate
appropriate memory device objects. One memory device (and corresponding SRAT
entry)
Define SSDT hotplug-able memory devices in _SB namespace. The dynamically
generated SSDT includes per memory device hotplug methods. These methods
just call methods defined in the DSDT. Also dynamically generate a MTFY
method and a MEON array of the online/available memory devices. ACPI
extraction
At 07/11/2012 06:31 PM, Vasilis Liaskovitis Wrote:
> The memory device generation is guided by qemu paravirt info. Seabios
> first uses the info to setup SRAT entries for the hotplug-able memory slots.
> Afterwards, build_memssdt uses the created SRAT entries to generate
> appropriate memory device
On 07/11/2012 01:18 PM, Jan Kiszka wrote:
> On 2012-07-11 11:53, Avi Kivity wrote:
>> On 07/03/2012 10:21 PM, Alex Williamson wrote:
>>> Here's the latest iteration of adding an interface to assert and
>>> de-assert level interrupts from external drivers like vfio. These
>>> apply on top of the pr
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
Noting pause loop exited vcpu helps in filtering right candidate to yield.
Yielding to same vcpu may result in more wastage of cpu.
struct kvm_lpage_info {
diff --git a/ar
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
> On 11/07/12 11:06, Avi Kivity wrote:
> [...]
>>> Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
>>> for spinlocks, though.
>>
>> Perhaps x86 should copy this.
>
> See arch/s390/lib/spinlock.c
> The basic idea is usi
On 11.07.2012, at 13:04, Avi Kivity wrote:
> On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
>> On 11/07/12 11:06, Avi Kivity wrote:
>> [...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
>>>
>>> Perhaps x86 should copy this.
>>
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
> On 07/11/2012 02:23 PM, Avi Kivity wrote:
>> On 07/09/2012 09:20 AM, Raghavendra K T wrote:
>>> Signed-off-by: Raghavendra K T
>>>
>>> Noting pause loop exited vcpu helps in filtering right candidate to
>>> yield.
>>> Yielding to same vcpu may result
On 11/07/12 13:04, Avi Kivity wrote:
> On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
>> On 11/07/12 11:06, Avi Kivity wrote:
>> [...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
>>>
>>> Perhaps x86 should copy this.
>>
>> See arc
On 2012-07-11 12:49, Avi Kivity wrote:
> On 07/11/2012 01:18 PM, Jan Kiszka wrote:
>> On 2012-07-11 11:53, Avi Kivity wrote:
>>> On 07/03/2012 10:21 PM, Alex Williamson wrote:
Here's the latest iteration of adding an interface to assert and
de-assert level interrupts from external drivers
On 07/11/2012 02:16 PM, Alexander Graf wrote:
>>
>>> yes the data structure itself seems based on the algorithm
>>> and not on arch specific things. That should work. If we move that to
>>> common
>>> code then s390 will use that scheme automatically for the cases were we
>>> call
>>> kvm_vcpu
On 07/11/2012 02:18 PM, Christian Borntraeger wrote:
> On 11/07/12 13:04, Avi Kivity wrote:
>> On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
>>> On 11/07/12 11:06, Avi Kivity wrote:
>>> [...]
> Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
> for spinlocks,
VHOST_SET_MEM_TABLE failed: Operation not supported
In vhost_set_memory(), we have
if (mem.padding)
return -EOPNOTSUPP;
So, we need to zero struct vhost_memory.
Signed-off-by: Asias He
---
tools/kvm/virtio/net.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
If vhost is enabled for a virtio device, vhost will poll the ioeventfd
on the kernel side and there is no need to poll it in userspace. Otherwise,
both vhost kernel and userspace will race to poll.
Signed-off-by: Asias He
---
tools/kvm/include/kvm/ioeventfd.h |2 +-
tools/kvm/ioeventfd.c
Current qemu-kvm master merged with latest upstream fails on startup:
(gdb) bt
#0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
/home/tlv/akivity/qemu/kvm-all.c:1602
#1 0x7fdcd49c9fda in kvm_apic_enable_tpr_reporting
(s=0x7fdcd75af6c0, enable=false) at
/home/tlv/akivity
On 07/11/2012 02:23 PM, Jan Kiszka wrote:
>>
>> I'd appreciate a couple of examples for formality's sake.
>
> From the top of my head: NVIDIA FX3700 (granted, legacy by now), Atheros
> AR9287. For others, I need to check.
Thanks.
>>
>>> And then there is not easily replaceable legacy hardware
On 11.07.2012, at 13:23, Avi Kivity wrote:
> On 07/11/2012 02:16 PM, Alexander Graf wrote:
>>>
yes the data structure itself seems based on the algorithm
and not on arch specific things. That should work. If we move that to
common
code then s390 will use that scheme automat
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristi
On 2012-07-11 13:46, Avi Kivity wrote:
> Current qemu-kvm master merged with latest upstream fails on startup:
>
> (gdb) bt
> #0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
> /home/tlv/akivity/qemu/kvm-all.c:1602
> #1 0x7fdcd49c9fda in kvm_apic_enable_tpr_reporting
>
On 11/07/12 13:51, Raghavendra K T wrote:
Almost all s390 kernels use diag9c (directed yield to a given guest cpu)
for spinlocks, though.
>>>
>>> Perhaps x86 should copy this.
>>
>> See arch/s390/lib/spinlock.c
>> The basic idea is using several heuristics:
>> - loop for a given amount o
On 07/11/12 12:31, Vasilis Liaskovitis wrote:
> In order to hotplug memory between RamSize and BUILD_PCIMEM_START, the pci
> window needs to start at BUILD_PCIMEM_START (0xe000).
> Otherwise, the guest cannot online new dimms at those ranges due to pci_root
> window conflicts. (workaround for l
On 07/11/2012 02:55 PM, Jan Kiszka wrote:
> On 2012-07-11 13:46, Avi Kivity wrote:
>> Current qemu-kvm master merged with latest upstream fails on startup:
>>
>> (gdb) bt
>> #0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
>> /home/tlv/akivity/qemu/kvm-all.c:1602
>> #1 0x00
On 07/11/2012 04:48 PM, Avi Kivity wrote:
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
Noting pause loop exited vcpu helps in filtering right candidate to
yield.
Yielding
On 2012-07-11 13:58, Avi Kivity wrote:
> On 07/11/2012 02:55 PM, Jan Kiszka wrote:
>> On 2012-07-11 13:46, Avi Kivity wrote:
>>> Current qemu-kvm master merged with latest upstream fails on startup:
>>>
>>> (gdb) bt
>>> #0 0x7fdcd4a047a0 in kvm_vcpu_ioctl (env=0x0, type=-1071075694) at
>>> /ho
On 07/11/2012 02:59 PM, Jan Kiszka wrote:
>>>
>>> I will try to reproduce. Is there a tree of the merge available?
>>
>> I just merged upstream into qemu-kvm master. For some reason there were
>> no conflicts.
>
> A rare moment, I guess. ;)
I'll put it down to random chance until we can figure
On 07/11/2012 05:25 PM, Christian Borntraeger wrote:
On 11/07/12 13:51, Raghavendra K T wrote:
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristic
On 07/11/2012 03:04 PM, Avi Kivity wrote:
specific command line or guest?
>>>
>>> qemu-system-x86_64
>>
>> Just did the same, but it's all fine here.
>
> Ok, I'll debug it. Probably something stupid like a miscompile.
Indeed, a simple clean build fixed it up. Paolo, it looks like
autodep
Il 11/07/2012 14:08, Avi Kivity ha scritto:
> specific command line or guest?
>>>
>>> qemu-system-x86_64
>>> >>
>>> >> Just did the same, but it's all fine here.
>> >
>> > Ok, I'll debug it. Probably something stupid like a miscompile.
> Indeed, a simple clean build fixed it up.
- Original Message -
> >
> > Hm, suppose we're the next-in-line for a ticket lock and exit due
> > to
> > PLE. The lock holder completes and unlocks, which really assigns
> > the
> > lock to us. So now we are the lock owner, yet we are marked as
> > don't
> > yield-to-us in the PLE code
On 07/11/2012 02:52 PM, Alexander Graf wrote:
>
> On 11.07.2012, at 13:23, Avi Kivity wrote:
>
>> On 07/11/2012 02:16 PM, Alexander Graf wrote:
> yes the data structure itself seems based on the algorithm
> and not on arch specific things. That should work. If we move that to
>
On 07/11/2012 05:21 PM, Raghavendra K T wrote:
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
So there is no win here, but there are other cases were diag44 is
used, e.g. cpu_relax.
I have to double check with others, if these cases are critical
On 06/21/2012 04:48 AM, Xiao Guangrong wrote:
> On 06/20/2012 10:11 PM, Takuya Yoshikawa wrote:
>
>
>> We can change the debug message later if needed.
>
>
> Actually, i am going to use tracepoint instead of
> these debug code.
Yes, these should be in the kvmmmu namespace.
--
error compilin
On 06/20/2012 10:56 AM, Xiao Guangrong wrote:
> Changlog:
> - always atomicly update the spte if it can be updated out of mmu-lock
> - rename spte_can_be_writable() to spte_is_locklessly_modifiable()
> - cleanup and comment spte_write_protect()
>
> Performance result:
> (The benchmark can be found
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO workload, so that's just incredible to see so much
On 07/11/2012 07:29 PM, Raghavendra K T wrote:
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO wor
Hello Joerg,
Joerg Roedel wrote:
> On Tue, Jun 05, 2012 at 08:27:05AM -0600, Alex Williamson wrote:
>> Joerg, the question is whether the multifunction device above allows
>> peer-to-peer between functions that could bypass the iommu. If not, we
>> can make it the first entry in device specific a
On 07/11/2012 01:32 PM, Vasilis Liaskovitis wrote:
> Implement batch dimm creation command line options. These could be useful for
> not bloating the command line with a large number of dimms.
IMO this is unneeded. With a management tool there is no problem
generating a long command line; from th
On 07/11/2012 04:31 AM, Vasilis Liaskovitis wrote:
> Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
> This patch implements a tail queue to store guest notifications for memory
> hot-add and hot-remove requests.
>
> Guest responses for memory hotplug command on a per-dimm b
On 07/11/2012 04:32 AM, Vasilis Liaskovitis wrote:
> Returns total memory of guest in bytes, including hotplugged memory.
>
> Signed-off-by: Vasilis Liaskovitis
Should this instead be merged with query-balloon output, so that we have
a single command that shows all aspects of memory usage (both
Hi Avi,
This is my current patch queue for ppc against master.
It contains an important bug fix which can lead to guest freezes when
using PAPR guests with PR KVM.
Please pull.
Alex
The following changes since commit 85b7059169e128c57a3a8a3e588fb89cb2031da1:
Xiao Guangrong (1):
KVM:
From: Benjamin Herrenschmidt
H_CEDE should enable the vcpu's MSR:EE bit. It does on "HV" KVM (it's
burried in the assembly code though) and as far as I can tell, qemu
does it as well.
Signed-off-by: Benjamin Herrenschmidt
Signed-off-by: Alexander Graf
---
arch/powerpc/kvm/book3s_pr_papr.c |
On 07/11/2012 06:38 PM, Alexander Graf wrote:
> Hi Avi,
>
> This is my current patch queue for ppc against master.
> It contains an important bug fix which can lead to guest freezes when
> using PAPR guests with PR KVM.
>
> Please pull.
Thanks, pulled.
--
error compiling committee.c: too many
VHOST_SET_MEM_TABLE failed: Operation not supported
In vhost_set_memory(), we have
if (mem.padding)
return -EOPNOTSUPP;
So, we need to zero struct vhost_memory.
Signed-off-by: Asias He
---
tools/kvm/virtio/net.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
If vhost is enabled for a virtio device, vhost will poll the ioeventfd
on the kernel side and there is no need to poll it in userspace. Otherwise,
both vhost kernel and userspace will race to poll.
Signed-off-by: Asias He
---
tools/kvm/include/kvm/ioeventfd.h |2 +-
tools/kvm/include/kvm/virtio.
On 07/11/2012 07:08 PM, Asias He wrote:
> VHOST_SET_MEM_TABLE failed: Operation not supported
>
> In vhost_set_memory(), We have
>
> if (mem.padding)
> return -EOPNOTSUPP;
>
> So, we need to zero struct vhost_memory.
>
Is this due to a change in vhost?
--
error compi
https://bugzilla.kernel.org/show_bug.cgi?id=15988
Alan changed:
What   | Removed | Added
Status | NEW     | RESOLVED
CC     |
Hi,
On Wed, Jul 11, 2012 at 06:48:38PM +0800, Wen Congyang wrote:
> > +if (enabled)
> > +add_e820(mem_base, mem_len, E820_RAM);
>
> add_e820() is declared in memmap.h. You should include this header file,
> otherwise, seabios cannot be built.
thanks. you had the same comment
Hi,
On Wed, Jul 11, 2012 at 01:56:19PM +0200, Gerd Hoffmann wrote:
> On 07/11/12 12:31, Vasilis Liaskovitis wrote:
> > In order to hotplug memory between RamSize and BUILD_PCIMEM_START, the pci
> > window needs to start at BUILD_PCIMEM_START (0xe000).
> > Otherwise, the guest cannot online new
Hi,
On Wed, Jul 11, 2012 at 08:59:03AM -0600, Eric Blake wrote:
> On 07/11/2012 04:31 AM, Vasilis Liaskovitis wrote:
> > Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
> > This patch implements a tail queue to store guest notifications for memory
> > hot-add and hot-remove
Hi,
On Wed, Jul 11, 2012 at 09:14:29AM -0600, Eric Blake wrote:
> On 07/11/2012 04:32 AM, Vasilis Liaskovitis wrote:
> > Returns total memory of guest in bytes, including hotplugged memory.
> >
> > Signed-off-by: Vasilis Liaskovitis
>
> Should this instead be merged with query-balloon output, s
Hi,
On Wed, Jul 11, 2012 at 05:55:25PM +0300, Avi Kivity wrote:
> On 07/11/2012 01:32 PM, Vasilis Liaskovitis wrote:
> > Implement batch dimm creation command line options. These could be useful
> > for
> > not bloating the command line with a large number of dimms.
>
> IMO this is unneeded. Wi
Hi Andreas,
On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
> May I please ask, if you meanwhile could get any information about
> potential peer-to-peer communication between the functions of the
> following multifunction device:
Good news: I actually found the right person to
Introduce struct disk_image_params to contain all the disk image parameters.
This is useful for adding more disk image parameters, e.g. disk image
cache mode.
Signed-off-by: Asias He
---
tools/kvm/builtin-run.c| 11 +--
tools/kvm/disk/core.c | 15 +---
On 05.07.2012, at 13:39, Caraman Mihai Claudiu-B02008 wrote:
>> -Original Message-
>> From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-
>> ow...@vger.kernel.org] On Behalf Of Alexander Graf
>> Sent: Wednesday, July 04, 2012 4:56 PM
>> To: Caraman Mihai Claudiu-B02008
>> Cc: kvm-...@vge
On 05.07.2012, at 14:54, Caraman Mihai Claudiu-B02008 wrote:
>> -Original Message-
>> From: Alexander Graf [mailto:ag...@suse.de]
>> Sent: Thursday, July 05, 2012 3:13 PM
>> To: Caraman Mihai Claudiu-B02008
>> Cc: kvm-...@vger.kernel.org; kvm@vger.kernel.org; linuxppc-
>> d...@lists.ozlab
On 07/06/2012 08:47 PM, Prarit Bhargava wrote:
> [PATCH 1/2] kvm, Add x86_hyper_kvm to complete detect_hypervisor_platform
> check [v3]
>
> While debugging I noticed that unlike all the other hypervisor code in the
> kernel, kvm does not have an entry for x86_hyper which is used in
> detect_hyper
On Fri, 2012-07-06 at 20:15 +, Nicholas A. Bellinger wrote:
> From: Nicholas Bellinger
>
> This patch changes virtio-scsi to use a new virtio_driver->scan() callback
> so that scsi_scan_host() can be properly invoked once virtio_dev_probe() has
> set add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK)
Hi,
We're running into a problem where we can't start up a single instance
of kvm-qemu with 5 or more virtual functions (for the ethernet card)
being passed to the guest. It's an Intel I350 NIC if it matters.
I noticed a discussion in a thread titled "[RFC PATCH 0/2] Expose
available KVM fr
Hello Joerg,
Joerg Roedel wrote:
> Hi Andreas,
>
> On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
>> May I please ask, if you meanwhile could get any information about
>> potential peer-to-peer communication between the functions of the
>> following multifunction device:
>
> G
On Wed, 2012-07-11 at 10:52 -0600, Chris Friesen wrote:
> Hi,
>
> We're running into a problem where we can't start up a single instance
> of kvm-qemu with 5 or more virtual functions (for the ethernet card)
> being passed to the guest. It's an Intel I350 NIC if it matters.
>
> I noticed a dis
On Wed, 2012-07-11 at 14:51 +0300, Avi Kivity wrote:
> On 07/11/2012 02:23 PM, Jan Kiszka wrote:
> >>
> >> I'd appreciate a couple of examples for formality's sake.
> >
> > From the top of my head: NVIDIA FX3700 (granted, legacy by now), Atheros
> > AR9287. For others, I need to check.
>
> Thank
On 07/11/2012 01:34 PM, Alex Williamson wrote:
The limiting factor to increasing memory slots was searching the array.
That's since been fixed by caching mmio page table entries.
Thanks for the confirmation of my suspicions.
Do you know roughly when this went in? A commit ID would be great.
On Wed, 2012-07-11 at 21:32 +0200, Andreas Hartmann wrote:
> Hello Joerg,
>
> Joerg Roedel wrote:
> > Hi Andreas,
> >
> > On Wed, Jul 11, 2012 at 04:26:30PM +0200, Andreas Hartmann wrote:
> >> May I please ask, if you meanwhile could get any information about
> >> potential peer-to-peer communica