virtio_console driver in guest,
to see whether there is a difference in virtio-blk performance and CPU usage.
2. Do not emulate the virtio-serial device, then install the virtio_balloon
driver (and also do not emulate the virtio-balloon device),
to see whether the virtio-blk performance degradation still happens.
3. Emulate the virtio-balloon device instead of the virtio-serial device,
then see whether the virtio-blk performance is hampered.
Based on the test results, corresponding analysis will be performed.
Any ideas?
Thanks,
Zhang Haoyu
be virtio_console.
>
>Looks like the ppoll takes more time to poll more fds.
>
>Some trace data with systemtap:
>
>12 fds:
>
>time rel_time symbol
>15 (+1) qemu_poll_ns [enter]
>18 (+3) qemu_poll_ns [return]
>
>76 fds:
>
The while loop variable is "bs1", but "bs" is always passed to
bdrv_snapshot_delete_by_id_or_name().
Signed-off-by: Zhang Haoyu
---
savevm.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/savevm.c b/savevm.c
index e19ae0a..2d8eb96 100644
The while loop variable is "bs1",
but "bs" is always passed to bdrv_snapshot_delete_by_id_or_name().
Broken in commit a89d89d, v1.7.0.
v1 -> v2:
* add broken commit id to commit message
Signed-off-by: Zhang Haoyu
Reviewed-by: Markus Armbruster
---
savevm.c | 11 +--
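For context, a minimal sketch of the corrected loop, assuming the v1.7.0-era
del_existing_snapshots() in savevm.c (helper names inferred from the commit
message, not quoted from the patch):

static int del_existing_snapshots(Monitor *mon, const char *name)
{
    BlockDriverState *bs1 = NULL;
    QEMUSnapshotInfo sn1;
    Error *err = NULL;

    while ((bs1 = bdrv_next(bs1))) {
        if (bdrv_can_snapshot(bs1) &&
            bdrv_snapshot_find(bs1, &sn1, name) >= 0) {
            /* Pass the device being visited (bs1), not the outer "bs". */
            bdrv_snapshot_delete_by_id_or_name(bs1, name, &err);
            if (err) {
                monitor_printf(mon, "%s\n", error_get_pretty(err));
                error_free(err);
                return -1;
            }
        }
    }
    return 0;
}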
I will check it.
>
>>Also, some old scheduler versions didn't put VMs on different
>>CPUs aggressively enough, this resulted in conflicts
>>when VMs compete for the same CPU.
>I will check it.
>
There is no aggressive contention for the same CPU, but when I pin each vcpu to
dif
Hi, all
Which version is best for a commercial product, qemu-2.0.0 or another version?
Any advice?
Thanks,
Zhang Haoyu
>> Hi, all
>>
>> Which version is best for a commercial product, qemu-2.0.0 or another version?
>> Any advice?
>>
>> Thanks,
>> Zhang Haoyu
>
>Use one of the downstreams: Red Hat, Fedora, Debian all ship QEMU and
>have active QEMU maintainers d
NULL
exp will be dereferenced.
do {
    main_loop_wait(false);
    if (state == TERMINATE) {
        state = TERMINATING;
        nbd_export_close(exp);
        nbd_export_put(exp);
        exp = NULL;  /* so later iterations cannot touch the freed export */
    }
} while (state != TERMINATED);
Signed-off-by: Zhang
qcow2_open
|- qcow2_read_snapshots
|-- goto fail;
|-- qcow2_free_snapshots
|- goto fail;
|- qcow2_free_snapshots /* in this case, the NULL snapshots dereference
happens */
Signed-off-by: Zhang Haoyu
---
block/qcow2-snapshot.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/qcow2-snapshot.c b/block/qc
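The usual defence against this pattern is to make the free routine idempotent.
A hedged sketch of the shape of qcow2_free_snapshots(), based on the callstack
above rather than on the actual 4-line patch:

void qcow2_free_snapshots(BlockDriverState *bs)
{
    BDRVQcowState *s = bs->opaque;
    int i;

    for (i = 0; i < s->nb_snapshots; i++) {
        g_free(s->snapshots[i].name);
        g_free(s->snapshots[i].id_str);
    }
    g_free(s->snapshots);
    s->snapshots = NULL;   /* reset pointer and count together, so a    */
    s->nb_snapshots = 0;   /* second call over the fail path is a no-op */
}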
been performed before in the fail case of
>> qcow2_read_snapshots().
>> shown in the callstack below,
>> qcow2_open
>> |- qcow2_read_snapshots
>> |-- goto fail;
>> |-- qcow2_free_snapshots
>> |- goto fail;
>> |- qcow2_free_snapshots /* on t
Note that the QEMU monitor commands are typically synchronous so they
>> will still block the VM.
>>
>
>If some of the requests are dropped by the host and never return to QEMU, I think
>bdrv_drain_all() will still cause the hang. Even with virtio-blk, reset has
>such a call. Maybe we could add some -ETIMEDOUT mechanism in QEMU's block
>layer.
>
>A workaround might be to configure the host storage to fail the IO after a
>timeout.
>
If -ETIMEDOUT is returned after a short network disconnection, may an
unpredictable fault happen in the VM?
e.g., the VM was reading important data (like system data).
Does aio replay work for this case?
Thanks,
Zhang Haoyu
>Fam
Hi all,
We chose qemu-2.0.0 as our distribution; which version of SeaBIOS is best for
qemu-2.0.0?
Thanks,
Zhang Haoyu
l doc vfio.txt, I'm not sure whether I should unbind all of the devices
which belong to one iommu_group.
If so, because a PF and its VFs belong to the same iommu_group, if I unbind the
PF, its VFs also disappear.
I think I have misunderstood something,
any advice?
Thanks,
Zhang Haoyu
these external libs/sources (incorporated in the qemu-2.0.0
from http://wiki.qemu.org/Download) to qemu-2.0.1 to build the emulator?
Thanks,
Zhang Haoyu
03 ComponentID=01 EltType=Config
Link0: Desc: TargetPort=00 TargetComponent=01 AssocRCRB-
LinkType=MemMapped LinkValid+
Addr: fed19000
Capabilities: [d94 v1] #19
Kernel driver in use: pcieport
The Intel 82599 (02:00.0 or 02:00.1) is behind the PCI bridge (00:01.1);
does the 00:01.1 PCI bridge support ACS?
Thanks,
Zhang Haoyu
>Alex
e-
RootSta: PME ReqID , PMEStatus- PMEPending-
DevCap2: Completion Timeout: Range BC, TimeoutDis+ ARIFwd-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- ARIFwd-
LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-,
Selectable De-emphasis: -6dB
Transmit Margin: Normal Operating Range,
EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB,
EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-,
LinkEqualizationRequest-
Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit-
Address: Data:
Capabilities: [90] Subsystem: Intel Corporation 6 Series/C200 Series
Chipset Family PCI Express Root Port 1
Capabilities: [a0] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: pcieport
Thanks,
Zhang Haoyu
>Alex
9, 0x9c1a, 0x9c1b,
>/* Wildcat PCH */
>0x9c90, 0x9c91, 0x9c92, 0x9c93, 0x9c94, 0x9c95, 0x9c96, 0x9c97,
>0x9c98, 0x9c99, 0x9c9a, 0x9c9b,
>/* Patsburg (X79) PCH */
>0x1d10, 0x1d12, 0x1d14, 0x1d16, 0x1d18, 0x1d1a, 0x1d1c, 0x1d1e,
>};
>
>Hopefully if you run 'lspci -n', you'll see your device ID listed among
>these. We don't currently have any quirks for PCIe switches, so if your
>IOMMU group is still bigger than it should be, that may be the reason.
>Thanks,
>
Using device-specific mechanisms to enable and verify ACS-like capability is
okay,
but with regard to those devices which completely lack ACS-like
capabilities,
what shall we do? How about applying the [PATCH] pci: Enable overrides for
missing ACS capabilities,
and how can we reduce the risk of data corruption and info leakage between VMs?
Thanks,
Zhang Haoyu
>Alex
/* Lynxpoint-H PCH */
>>0x8c10, 0x8c11, 0x8c12, 0x8c13, 0x8c14, 0x8c15, 0x8c16, 0x8c17,
>>0x8c18, 0x8c19, 0x8c1a, 0x8c1b, 0x8c1c, 0x8c1d, 0x8c1e, 0x8c1f,
>>/* Lynxpoint-LP PCH */
>>0x9c10, 0x9c11, 0x9c12, 0x9c13, 0x9c14, 0x9c15, 0x9c16
Hi, Krishna, Shirley
How can I get the latest patch of the M:N implementation of multiqueue?
I am going to test the combination of the "M:N Implementation of multiqueue"
and "vhost: add polling mode".
Thanks,
Zhang Haoyu
Hi, all
I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC
interrupt storm,
because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is always true in
__kvm_ioapic_update_eoi().
Any ideas?
Thanks,
Zhang Haoyu
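For reference, a simplified sketch of the EOI-broadcast loop in question (the
shape of __kvm_ioapic_update_eoi() in kernels of that era; field and helper
names may differ slightly between versions):

/* On an EOI broadcast, every matching level-triggered pin that is still
 * asserted (irr bit set) and unmasked is serviced again immediately.
 * If the guest never clears the interrupt source, this re-injects the
 * irq on every EOI: the storm described above. */
for (i = 0; i < IOAPIC_NUM_PINS; i++) {
    union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i];

    if (ent->fields.vector != vector ||
        trigger_mode != IOAPIC_LEVEL_TRIG)
        continue;
    ent->fields.remote_irr = 0;
    if (!ent->fields.mask && (ioapic->irr & (1 << i)))
        ioapic_service(ioapic, i);
}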
ndition.
>
Sorry, I don't understand.
I think an interrupt should not be enabled before its handler is
successfully registered;
is it possible that the e1000 emulation injects the interrupt before the
interrupt is successfully enabled?
Thanks,
Zhang Haoyu
>e1000 emulation is far from
handler is registered. And Windows guest does not have a mechanism to
>>> detect and disable irq in such condition.
>>>
>> Sorry, I don't understand.
>> I think an interrupt should not be enabled before its handler is
>> successfully registered,
>>
>
>Thanks, applied to my block tree:
>https://github.com/stefanha/qemu/commits/block
>
Can we use the queued I/O data as a cache?
An I/O write would directly return and tell the guest the I/O is completed after
the I/O is enqueued,
a better user experience for burst I/O,
and an I/O read would firstly
data as a cache,
>an I/O write would directly return and tell the guest the I/O is completed after
>the I/O is enqueued,
>a better user experience for burst I/O,
>and an I/O read would firstly search the I/O queue; if matched data is found, directly
>get the data from the queue,
>if not, then re
You are right.
Is the I/O merging in the queue worth performing?
Thanks,
Zhang Haoyu
>Fam
l may not be sufficient. And you can
>search e1000 in the archives, and you can find that some behaviour of the e1000
>registers did not function like the spec said. It was really
>suggested to use virtio-net instead of e1000 in the guest.
>>
Will the "[PATCH] kvm: ioapic: conditionally delay irq delivery during eoi
broadcast" sometimes add delay to virtual interrupt injection,
so that some delay-sensitive applications will be impacted?
Thanks,
Zhang Haoyu
        return n;
    }
    if (vs->csock == -1) {
        vnc_disconnect_finish(vs);
    } else if (sync) {
        vnc_jobs_join(vs);
    }
-   return 0;
+   return n;
Thanks,
Zhang Haoyu
tual interrupt injection sometimes,
>> then some time delay sensitive applications will be impacted?
>
>I haven't tested it much, but it only delays a minor 1% of irqs in the
>hope that the guest irq handler will be registered shortly. But I suspect it's
>a bug of e1000, which injects the irq at the wrong time. Under what cases
>did you meet this issue?
In some scenarios, not constant and not 100% reproducible,
e.g., rebooting the VM, ifdown on the e1000 NIC, installing Kaspersky (network
configuration is performed during the installation stage), etc.
Thanks,
Zhang Haoyu
>>
>> Thanks,
>> Zhang Haoyu
= 0;
++     ioapic->irq_eoi[i] = 0;
>+ } else {
>+     ioapic_service(ioapic, i);
>+ }
>+ }
++ else {
++     ioapic->irq_eoi[i] = 0;
++
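Pieced together, the mechanism under discussion looks roughly like this (a
sketch reconstructed from the patch fragments above; the constant and the
delayed-work field are assumptions, not the verbatim patch):

/* Count back-to-back EOIs for a pin whose line is still asserted; after
 * too many in a row, defer re-injection via delayed work instead of
 * servicing the irq immediately, giving the guest time to register a
 * proper handler. */
if (!ent->fields.mask && (ioapic->irr & (1 << i))) {
    ++ioapic->irq_eoi[i];
    if (ioapic->irq_eoi[i] == IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) {
        schedule_delayed_work(&ioapic->eoi_inject, HZ / 100);
        ioapic->irq_eoi[i] = 0;
    } else {
        ioapic_service(ioapic, i);
    }
} else {
    ioapic->irq_eoi[i] = 0;
}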
Hi, Yang, Gleb, Michael,
Could you help review the patch below, please?
Thanks,
Zhang Haoyu
>> Hi Jason,
>> I tested the patch below; it's okay, the e1000 interrupt storm disappeared.
>> But I am going to make a small change to it; could you help review it?
>>
>>
,base=localtime -global kvm-pit.lost_tick_policy=discard
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
Any ideas?
Thanks,
Zhang Haoyu
offset (very large value), so the file is truncated to a very large
size.
Any ideas?
Thanks,
Zhang Haoyu
memcpy(tmp_l1_table, l1_table, l1_size * sizeof(uint64_t));
for (i = 0; i < l1_size; i++) {
    cpu_to_be64s(&tmp_l1_table[i]);
}
ret = bdrv_pwrite_sync(bs->file, l1_table_offset, tmp_l1_table,
                       l1_size2);
free(tmp_l1_table);
}
Thanks,
Zhang Haoyu
/task is running in guest.
Is there a plan for this work?
Thanks,
Zhang Haoyu
ve degree can be used to decide the threshold of which
vcpus belong to the same gang, just a wild thought.
> Regards,
> Wanpeng Li
>>
>> Thanks,
>> Zhang Haoyu
d_pwritev
| qcow2_co_writev
|- qcow2_alloc_cluster_link_l2
|-- qcow2_free_any_clusters
|--- qcow2_free_clusters
|---- update_refcount
|----- qcow2_process_discards
|------ g_free(d) <== in the next iteration, this Qcow2DiscardRegion will be
double-freed.
Signed-off-by: Zhang Haoyu
When the Qcow2DiscardRegion is adjacent to another one referenced by "d",
free the Qcow2DiscardRegion referenced by "p" only after
it has been removed from the s->discards queue.
Signed-off-by: Zhang Haoyu
---
block/qcow2-refcount.c | 1 +
1 file changed, 1 insertion(+)
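A sketch of the ordering this describes, assuming the merge loop in
update_refcount_discard() in block/qcow2-refcount.c (field names are
assumptions, not the verbatim patch):

/* When region "p" is merged into its neighbour "d", unlink "p" from
 * s->discards first and free it immediately, so no stale pointer to "p"
 * can survive into a later iteration or a subsequent
 * qcow2_process_discards() run. The _SAFE iterator caches the next
 * element before the body runs, so freeing "p" here is harmless. */
Qcow2DiscardRegion *p, *p_next;

QTAILQ_FOREACH_SAFE(p, &s->discards, next, p_next) {
    if (p == d || p->offset > d->offset + d->bytes ||
        d->offset > p->offset + p->bytes) {
        continue;
    }
    d->offset = MIN(d->offset, p->offset);
    d->bytes += p->bytes;
    QTAILQ_REMOVE(&s->discards, p, next);
    g_free(p);
}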
On 2014-10-12 15:34, Kevin Wolf wrote:
Am 11.10.2014 um 09:14 hat Zhang Haoyu geschrieben:
In qcow2_update_snapshot_refcount() -> qcow2_process_discards() -> bdrv_discard(),
the Qcow2DiscardRegion which is referenced by the "next" pointer in
qcow2_process_discards() may now be freed, in n
bdrv_pwrite_sync(bs->file, l1_table_offset, tmp_l1_table,
>> l1_size2);
>>
>> free(tmp_l1_table);
>> }
>
>l1_table is already a local variable (local to
>qcow2_update_snapshot_refcount()), so I can't really imagine how
>introducing another local buffer should mitigate the problem, if there
>is any.
>
l1_table is not necessarily a local variable in qcow2_update_snapshot_refcount();
it depends on the condition "if (l1_table_offset != s->l1_table_offset)":
if the condition is not true, l1_table = s->l1_table.
Thanks,
Zhang Haoyu
>Max
to
>> qcow2_update_snapshot_refcount,
>> which depends on the condition "if (l1_table_offset != s->l1_table_offset)";
>> if the condition is not true, l1_table = s->l1_table.
>
>Oh, yes, you're right. Okay, so in theory nothing should happen anyway,
>because qcow2 does not have to be reentrant (so s->l1_table will not be
>accessed while it's big endian and therefore possibly not in CPU order).
Could you explain how qcow2 does not have to be reentrant?
In below stack,
qcow2_update_snapshot_refcount
|- cpu_to_be64s(&l1_table[i])
|- bdrv_pwrite_sync
|-- bdrv_pwrite
|--- bdrv_pwritev
|---- bdrv_prwv_co
|----- aio_poll(aio_context) <== this aio_context is qemu_aio_context
|------ aio_dispatch
|------- bdrv_co_io_em_complete
|-------- qemu_coroutine_enter(co->coroutine, NULL); <== the coroutine entry is
bdrv_co_do_rw
bdrv_co_do_rw will access l1_table to perform the I/O operation.
Thanks,
Zhang Haoyu
>But I find it rather ugly to convert the cached L1 table to big endian,
>so I'd be fine with the patch you proposed.
>
>Max
l1_size2);
>>>>>>
>>>>>>free(tmp_l1_table);
>>>>>>}
>>>>> l1_table is already a local variable (local to
>>>>> qcow2_update_snapshot_refcount()), so I can't really imagine how
>>>>
>>>>>>> introducing another local buffer should mitigate the problem, if there
>>>>>>> is any.
>>>>>>>
>>>>>> l1_table is not necessarily a local variable to
>>>>>> qcow2_update_snapshot_refcount,
>&
Hi,
I noticed that bdrv_drain_all() is performed in load_vmstate() before
bdrv_snapshot_goto(),
and bdrv_drain_all() is performed in qmp_transaction() before
internal_snapshot_prepare(),
so is it also necessary to perform bdrv_drain_all() in savevm and delvm?
Thanks,
Zhang Haoyu
s?
This coroutine also runs in the main thread.
Am I missing something?
Thanks,
Zhang Haoyu
>Kevin
blem mentioned
above,
please see the mail discussing it.
>I do see that there might be a chance of concurrency, but that doesn't
>automatically mean the requests are conflicting.
>
>Would you feel better with taking s->lock in qcow2_snapshot_delete()?
Both deleting a snapshot and the coroutine of pending I/O read/write (bdrv_co_do_rw)
are performed in the main thread; could BDRVQcowState.lock work?
Thanks,
Zhang Haoyu
>This might actually be a valid concern.
>
>Kevin
Use a local buffer to bdrv_pwrite_sync the L1 table,
so there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu
---
block/qcow2-refcount.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/block/qcow2
() to bdrv_snapshot_delete() to avoid this problem.
Signed-off-by: Zhang Haoyu
---
block/snapshot.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/snapshot.c b/block/snapshot.c
index 85c52ff..ebc386a 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -236,6 +236,10 @
hot_delete()?
>> Both deleting a snapshot and the coroutine of pending I/O
>> read/write (bdrv_co_do_rw)
>> are performed in the main thread; could BDRVQcowState.lock work?
>
>Yes. s->lock is not a mutex for threads, but a coroutine based one.
>
Yes, you are right.
>The probl
>> Use a local buffer to bdrv_pwrite_sync the L1 table,
>> so there is no need to convert the cached L1 table between
>> big-endian and host byte order.
>>
>> Signed-off-by: Zhang Haoyu
>> ---
>> block/qcow2-refcount.c | 22 +++---
>>
Use a local buffer to bdrv_pwrite_sync the L1 table,
so there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu
---
v1 -> v2:
- remove the superfluous assignment, l1_table = NULL;
- replace 512 with BDRV_SECTOR_SIZE, and align_offset with ROUND
>> Use a local buffer to bdrv_pwrite_sync the L1 table,
>> so there is no need to convert the cached L1 table between
>> big-endian and host byte order.
>>
>> Signed-off-by: Zhang Haoyu
>> ---
>> v1 -> v2:
>> - remove the superfluous assignment, l1_table =
Use a local buffer to bdrv_pwrite_sync the L1 table,
so there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu
Reviewed-by: Max Reitz
---
v2 -> v3:
- replace g_try_malloc0 with qemu_try_blockalign
- copy the latest local L1 table back t
>Use a local buffer to bdrv_pwrite_sync the L1 table,
>so there is no need to convert the cached L1 table between
>big-endian and host byte order.
>
>Signed-off-by: Zhang Haoyu
>Reviewed-by: Max Reitz
>---
>v2 -> v3:
> - replace g_try_malloc0 with qemu_try_blockalign
> - co
Use a local buffer to bdrv_pwrite_sync the L1 table,
so there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu
Reviewed-by: Max Reitz
---
v3 -> v4:
- convert the local L1 table to host byte order before copying it
back to s->l1_table
v2 -> v3:
>> Use a local buffer to bdrv_pwrite_sync the L1 table,
>> so there is no need to convert the cached L1 table between
>> big-endian and host byte order.
>>
>> Signed-off-by: Zhang Haoyu
>> Reviewed-by: Max Reitz
>> ---
>> v3 -> v4:
>> - convert lo
Use a local buffer to bdrv_pwrite_sync the L1 table,
so there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu
Reviewed-by: Max Reitz
---
v4 -> v5:
- delete superfluous check of "l1_size2 != 0"
after qemu_try_blockalign(l1_siz
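Taken together, the final shape of the change is roughly the following (a
hedged sketch assuming the QEMU 2.x block APIs named in the changelogs above,
not the verbatim v5 patch):

/* Write a byteswapped copy of the cached L1 table instead of converting
 * s->l1_table in place, so the cached table always stays in host byte
 * order. */
uint64_t *l1_table = qemu_try_blockalign(bs->file,
                                         ROUND_UP(l1_size2, BDRV_SECTOR_SIZE));
if (l1_table == NULL) {
    return -ENOMEM;
}
memcpy(l1_table, s->l1_table, l1_size2);
for (i = 0; i < l1_size; i++) {
    cpu_to_be64s(&l1_table[i]);
}
ret = bdrv_pwrite_sync(bs->file, l1_table_offset, l1_table, l1_size2);
qemu_vfree(l1_table);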
Hi, Max
How is the progress of optimizing qcow2_check_metadata_overlap?
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/127037/focus=127364
Thanks,
Zhang Haoyu
agent,
which is responsible for installing the applications.
Thanks,
Zhang Haoyu
t should I do?
>
>Install the applications on each clone separately, or use some other
>method to make it available (like installing on a shared network
>resource).
>
Could you detail "installing on a shared network resource"?
Thanks,
Zhang Haoyu
>> Can I rebase ima
, I think it has nothing to do with legacy interrupt mode, right?
I am going to observe the difference in perf top data on qemu and in perf kvm stat
data when disabling/enabling virtio-serial in the guest,
and the difference in perf top data on the guest when disabling/enabling virtio-serial
in the guest,
any ideas?
oing to observe the difference in perf top data on qemu and perf kvm
>stat data when disabling/enabling virtio-serial in the guest,
>and the difference in perf top data on the guest when disabling/enabling virtio-serial
>in the guest,
>any ideas?
>
>Thanks,
>Zhang Haoyu
>>If you restrict the number of vectors the virtio-serial device gets
>>(using the -device virtio-serial-pci,vectors= param), does that make
>>things better for you?
>>
>>
>> Amit
?
>>
>> I am going to observe the difference in perf top data on qemu and perf kvm
>> stat data when disabling/enabling virtio-serial in the guest,
>> and the difference in perf top data on the guest when disabling/enabling
>> virtio-serial in the guest,
>> any ideas?
>
>So it's a windows guest; it could be something windows driver
>specific, then? Do you see the same on Linux guests too?
>
I suspect something Windows-driver specific, too.
I have not tested a Linux guest; I'll test it later.
Thanks,
Zhang Haoyu
> Amit
got back again, very obvious.
>> add comments:
>> although virtio-serial is enabled, I don't use it at all, yet the
>> degradation still happened.
>
>Using the vectors= option as mentioned below, you can restrict the
>number of MSI vectors the virtio-serial device gets. Yo
> >+* possibility to get proper irq handler
>> >+* registered. So we need to give some breath to
>> >+* guest. TODO: 1 is too long?
>> >+*/
>> >+
>> > > If virtio-blk and virtio-serial share an IRQ, the guest operating system
>> > > has to check each virtqueue for activity. Maybe there is some
>> > > inefficiency doing that.
>> > > AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even
>> > > if everything is unused.
>>
'include/linux/unaligned': File too large
How do I resolve these errors?
Thanks,
Zhang Haoyu
Hi, Paolo, Amit,
any ideas?
Thanks,
Zhang Haoyu
On 2014-9-4 15:56, Zhang Haoyu wrote:
>>>>> If virtio-blk and virtio-serial share an IRQ, the guest operating system
>>>>> has to check each virtqueue for activity. Maybe there is some
>>>>> ineff
no suitable irq handler in case it may
register one very soon, and for a guest which has a bad irq detection routine (such
as note_interrupt() in Linux), this bad irq would still be recognized soon, as in the
past.
Cc: Michael S. Tsirkin
Signed-off-by: Jason Wang
Signed-off-by: Zhang Haoyu
---
includ
o has a bad irq detection routine (such
as note_interrupt() in Linux), this bad irq would still be recognized soon, as in the
past.
Cc: Michael S. Tsirkin
Signed-off-by: Jason Wang
Signed-off-by: Zhang Haoyu
---
include/trace/events/kvm.h | 20 ++
virt/kvm/ioapic.c | 51
no suitable irq handler in case it may
register one very soon, and for a guest which has a bad irq detection routine (such
as note_interrupt() in Linux), this bad irq would still be recognized soon, as in the
past.
Cc: Michael S. Tsirkin
Signed-off-by: Jason Wang
Signed-off-by: Zhang Haoyu
---
includ
>> such
>> as note_interrupt() in linux ), this bad irq would be recognized soon as in
>> the
>> past.
>>
>> Cc: Michael S. Tsirkin
>> Signed-off-by: Jason Wang
>> Signed-off-by: Zhang Haoyu
>> ---
>> include/trace/events/kvm.h
no suitable irq handler in case it may
register one very soon, and for a guest which has a bad irq detection routine (such
as note_interrupt() in Linux), this bad irq would still be recognized soon, as in the
past.
Cc: Michael S. Tsirkin
Signed-off-by: Jason Wang
Signed-off-by: Zhang Haoyu
---
includ
no suitable irq handler in case it may
>> register one very soon and for guest who has a bad irq detection routine (
>> such
>> as note_interrupt() in linux ), this bad irq would be recognized soon as in
>> the
>> past.
>
e missing "}" for if (ioapic->irq_eoi[i] ==
IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) {
Cc: Michael S. Tsirkin
Cc: Jan Kiszka
Signed-off-by: Jason Wang
Signed-off-by: Zhang Haoyu
---
include/trace/events/kvm.h | 20 +++
virt/kvm/ioapic.c | 50 +++
ical Slot: 18
>Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR+ FastB2B- DisINTx+
>Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
><MAbort- >SERR- <PERR- INTx-
>Interrupt: pin A routed to IRQ 10
>Region 0: I/O ports at c0c0 [size=32]
>Region 1: Memory at febd4000 (32-bit, non-prefetchable) [size=4K]
>Expansion ROM at feb8 [disabled] [size=256K]
>Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
>Vector table: BAR=1 offset=
>PBA: BAR=1 offset=0800
>Kernel driver in use: virtio-pci
>Kernel modules: virtio_pci
>
>Thanks,
>Zhang Haoyu
Hi, all
I run savevm via the qemu monitor, but how can I check whether savevm is completed?
I haven't found a query interface.
Thanks,
Zhang Haoyu
>
So, only asynchronous operations provide a query interface, like
hmp_info_migrate, right?
Thanks,
Zhang Haoyu
>Stefan
Hi, all
If I use the qemu command directly to run a VM, bypassing libvirt, how do I configure
qemu to ensure that each VM has its own log file, like vmname.log?
For example, the VM rhel7-net has its own log file, rhel7-net.log, and the VM rhel7-stor
has its own log file, rhel7-stor.log.
Thanks,
Zhang Haoyu
, fmt, ...),
etc.,
i.e., how do I redirect the output of fprintf(stderr, fmt, ...), or some other
log interface, to a specified file?
I saw the configuration in qemuStateInitialize() in the libvirt code, but now I run
the VM directly through the qemu command, bypassing libvirt.
Thanks,
Zhang Haoyu
>Or am I misunderstanding what you want?
log file.
But if I run a VM directly with the qemu command, bypassing libvirt, then how do I
configure qemu to ensure that each VM has its own log file,
and how do I redirect stderr and stdout to each VM's own log file?
Thanks,
Zhang Haoyu
>Cheers,
>Andreas
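For illustration, one way to approximate libvirt's behaviour when launching
QEMU by hand (a sketch: -D sends qemu_log output to a file, and the shell
redirection captures stdout/stderr; names and paths are examples):

qemu-system-x86_64 -name rhel7-net -m 1024 \
    -drive file=/images/rhel7-net.qcow2 \
    -D /var/log/qemu/rhel7-net.log \
    >> /var/log/qemu/rhel7-net.log 2>&1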
printf(stderr, fmt, ...)?
>Should I redirect stderr to a specified log file?
>
>In the libvirt code, when starting a VM (qemuProcessStart), it will create a qemu log
>file named /var/log/libvirt/qemu/vmname.log,
>and redirect stderr and stdout to the file descriptor of this qemu log file.
>
>But if I run a VM directly with the qemu command, bypassing libvirt, then how do I
>configure qemu to ensure that each VM has its own log file,
>and how do I redirect stderr and stdout to each VM's own log file?
>
>Thanks,
>Zhang Haoyu
>
>>Cheers,
>>Andreas
=1 -global PIIX4_PM.disable_s4=1 -post win2008_iotest -enable-kvm -L /boot/pc-bios
Have you seen a similar problem before?
Any ideas?
Thanks,
Zhang Haoyu
>> >> Hi, Vadim
>> >> I read the kvm-2012-forum paper < KVM as a Microsoft-compatible
>> >> hypervisor>,
>> >&
fff is a pretty huge value.
>
Which value do you advise using?
Thanks,
Zhang Haoyu
>Best regards,
>Vadim.
-elogfile
redirect stderr to @var{logfile}
ETEXI
then we can set the error log file through the qemu command line, with
/var/log/qemu/##.log as the default.
Thanks,
Zhang Haoyu
>There's plently of tree wide work to clean up the cases where stderr
>is used where qemu_log should be. If you are finding that log
>information is going to stderr instead of the log, patches would be
>welcome.
>
>Regards,
>Peter
should be fixed. Do you have specific examples
>of information going to stderr that you would rather go to a log (be
>it an error log or something else?).
>
I use Proxmox to manage VMs; it does not redirect qemu's stderr, and it starts the VM
with the -daemonize option,
so the error log disappeared.
>> I want to redirect the error log of qemu to a specified logfile; if a
>> fault happens, I can use the error log to analyze the fault.
>>
>> And why does qemu output the error log to stderr instead of an error
>> logfile which can be configured?
0.897491] RIP [] __gfn_to_pfn_memslot+0x2e6/0x355 [kvm]
[0.897545] RSP
Any ideas?
Thanks,
Zhang Haoyu
> Hi, all
>
> I provide the host's memory to the guest by remap_pfn_range()ing a host page to
> qemu, and when the guest accesses the page, the host panicked.
>
I forgot to set vma->vm_pgoff:
vma->vm_pgoff = virt_to_phys(test_mem) >> PAGE_SHIFT;
> Any ideas?
>
> Thanks,
> Zhang Haoyu
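A sketch of where that assignment belongs, assuming a character-device mmap
handler and the test_mem buffer from the report (hypothetical names
throughout):

#include <linux/mm.h>

/* Map the module's test_mem buffer into the caller's address space.
 * Setting vm_pgoff keeps the VMA's offset consistent with the PFN that
 * is actually mapped, which is the fix identified above. */
static int test_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long pfn = virt_to_phys(test_mem) >> PAGE_SHIFT;

    vma->vm_pgoff = pfn;
    return remap_pfn_range(vma, vma->vm_start, pfn,
                           vma->vm_end - vma->vm_start,
                           vma->vm_page_prot);
}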
t fixes this is:
>211ea74022f51164a7729030b28eec90b6c99a08
>
See the post below, please.
https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
Thanks,
Zhang Haoyu
>So 211ea740 needs to be backported to P/Q/R to fix this issue. I have
v1 packages of a precise backport here; I've confirmed the performance
diff
On 2015-01-23 07:30:19, Kashyap Chamarthy wrote:
>On Wed, Jan 21, 2015 at 11:39:44AM +0100, Paolo Bonzini wrote:
> >
> >
> > On 21/01/2015 11:32, Zhang Haoyu wrote:
> > > Hi,
>> >
> > > Does drive_mirror support incremental ba
On 2015-01-26 17:29:43, Paolo Bonzini wrote:
>
> On 26/01/2015 02:07, Zhang Haoyu wrote:
> > Hi, Kashyap
> > I've tried 'drive_backup' via QMP,
> > but the snapshots were not backed up to the destination;
> > I think the reason is that backup_run() only copies the
On 2015-01-26 19:29:03, Paolo Bonzini wrote:
>
> On 26/01/2015 12:13, Zhang Haoyu wrote:
> > Thanks, Paolo,
> > but too many internal snapshots were saved by customers;
> > switching to the external snapshot mechanism has a significant impact
> > on subsequent upgrades.
,
if not found, then the c->entries.
Any idea?
Thanks,
Zhang Haoyu
On 2015-01-26 22:11:59, Max Reitz wrote:
>On 2015-01-26 at 08:20, Zhang Haoyu wrote:
> > Hi, all
> >
> > Regarding a very large qcow2 image, e.g., 2 TB,
> > a long disruption happened when performing a snapshot,
> > which was caused by cache updates and I/O wait.
On 2015-01-27 09:24:13, Zhang Haoyu wrote:
>
> On 2015-01-26 22:11:59, Max Reitz wrote:
> >On 2015-01-26 at 08:20, Zhang Haoyu wrote:
>> > Hi, all
> > >
> > > Regarding too large qcow2 image, e.g., 2TB,
> > > so long disruption happened when per
Fix the wrong subsection name in mc146818rtc to avoid vmstate_subsection_load()
failure during incoming migration or loadvm.
Signed-off-by: Zhang Haoyu
---
hw/timer/mc146818rtc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/timer/mc146818rtc.c b/hw/timer/mc146818rtc.c
index
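The likely shape of the one-line change (hedged: vmstate subsection names must
be prefixed with the parent vmstate name plus "/", or
vmstate_subsection_load() will not match them; the field layout here is
reconstructed, not quoted from the patch):

static const VMStateDescription vmstate_rtc_irq_reinject_on_ack_count = {
    .name = "mc146818rtc/irq_reinject_on_ack_count",  /* was missing the
                                                         "mc146818rtc/" prefix */
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT16(irq_reinject_on_ack_count, RTCState),
        VMSTATE_END_OF_LIST()
    }
};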
On 2014/12/23 9:36, Fam Zheng wrote:
> On Mon, 12/22 20:21, Zhang Haoyu wrote:
>>
>> On 2014/12/22 20:05, Paolo Bonzini wrote:
>>>
>>>
>>> On 22/12/2014 12:40, Zhang Haoyu wrote:
>>>> On 2014/12/22 17:54, Paolo Bonzini wrote:
Could you detail the peeking techniques mentioned above?
>>
>> Thanks,
>> Zhang Haoyu
>
> Generally I meant virDomainMemoryPeek, but nothing prevents you from
> writing code with the same functionality, if libvirt usage is not preferred;
> it is only about asking the monitor for chunks of memory and parsing them in
> a proper way.
>
Thanks, Andrey.
Hi,
what's the status of migration support for vhost-user?
Thanks,
Zhang Haoyu
On 2014-06-18 22:07:49, Michael S. Tsirkin wrote:
> On Wed, Jun 18, 2014 at 04:37:57PM +0300, Nikolay Nikolaev wrote:
> >
> >
> >
> > On Wed,
On 2014-12-22 09:28:52, Paolo Bonzini wrote:
>
>On 22/12/2014 07:39, Zhang Haoyu wrote:
>> Hi,
>>
>> When I performed P2V from physical servers with win2008 to a kvm vm,
>> some cases failed because the physical disk was using GPT for partitioning,
>> and QEMU doesn