Simulating an NVMe device with multiple controllers

2025-03-05 Thread David

<https://matrix.to/#/!xtcopKjjucUQThGiEn:matrix.org/$174100063219826vNiUe:matrix.org?via=matrix.org&via=mozilla.org&via=invisiblethingslab.com>

I'm trying to use Qemu to simulate an NVMe device with multiple 
controllers (is that the correct terminology?) to replicate some 
behavior that I'm seeing on a physical server.


On this server, I have two devices, each with a single namespace: nvme0n1 and 
nvme1n1, but I also see a device named `nvme1c1n1`, which is what I 
want to replicate.


How can I do that? Reading the NVMe documentation, I couldn't figure it out.

The flags I tried were

-device nvme,id=nvme-ctrl-0,serial=deadbeef \
-drive format=raw,file=disk.raw,if=none,id=nvm0 \
-device nvme-ns,drive=nvm0


but when I do this, the device is still "nvme0n1", without the controller node.
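
My guess is that I need an NVMe subsystem with more than one controller 
attached to it, something along these lines (untested, and the parameter 
names may well be off):

-device nvme-subsys,id=nvme-subsys-0,nqn=subsys0 \
-device nvme,serial=deadbeef,subsys=nvme-subsys-0 \
-device nvme,serial=deadbeef,subsys=nvme-subsys-0 \
-drive format=raw,file=disk.raw,if=none,id=nvm0 \
-device nvme-ns,drive=nvm0,nsid=1,shared=on

My understanding is that the nvmeXcYnZ nodes come from the Linux native 
NVMe multipath code, which kicks in for namespaces that sit behind a 
controller belonging to a subsystem.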


Thanks,

David


[Qemu-discuss] Combining VM checkpoints with -snapshot

2014-04-22 Thread David Wilson
Hi there,

On experimenting with Qemu for use in providing CI build slaves, I had
hoped to boot Windows, call "savevm" monitor command, then, with the
.qcow2 cached, use something similar to "qemu -snapshot -S -monitor
stdio .." followed by "loadvm" and "cont" to get a fresh, temporary VM
booted in a few seconds.

On trying this, it seems while -snapshot is in use, any VM checkpoints
from the .qcow2 are hidden. Attempting to use "qemu-img create" to
construct a temporary chained qcow2 produces the same effect. I guess
this is basically what -snapshot does internally.

Looking at the code, I can see multiple ways to nastily poke holes in
the interface to get a similar effect (at least, as far as my limited
understanding of the code goes). Before doing that, though, I was
wondering if this is already possible with the current UI?

The goal is to resume driver and RAM state from the base .qcow2, and 
write changes to the temporary .qcow2, allowing fully booted short-lived
machines to be created and rapidly discarded.
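
Concretely, the flow I am hoping for looks roughly like this (a sketch, with
made-up names; "ci-ready" is an internal snapshot previously saved into
base.qcow2 with "savevm"):

$ qemu-img create -f qcow2 -b base.qcow2 temp.qcow2
$ qemu-system-x86_64 -hda temp.qcow2 -S -monitor stdio
(qemu) loadvm ci-ready
(qemu) cont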


Finally, and on a related note, it seems there is no technical
restriction while running under KVM that would prevent using a writeable
MAP_PRIVATE memory mapping of the guest's RAM. This would allow multiple
build slaves to share immutable parts of RAM from the snapshot, and
avoid a costly deserialization step at startup, although currently Qemu's
save/load IO code works nothing like this.

Is there some technical restriction I'm missing that would prevent this?
It would allow further cutting ephemeral VM boot time from a few seconds
to perhaps a few hundred milliseconds, in addition to saving on RSS.


David



[Qemu-discuss] session parameters and error codes

2014-08-06 Thread David Brenner
Hi.

Is there an easy way to get the command line parameters for a specific
(already started) QEMU session?
I need to know if a specific image file, device or drive is already in use
by that session (32 and 64 bit).

Is there a list with QEMU error codes (each one explained)?

Thank you.

Regards,
David


[Qemu-discuss] kvm-pr support

2014-08-14 Thread David Gosselin

Hello,
I'm running linux-3.16 on a ppc64 built with kvm-pr as a module (no 
kvm-hv).  I manually modprobe the kvm-pr module after boot and realize 
performance gains on my ppc64 guest in qemu.  According to 'make 
menuconfig', kvm-pr allows some virtualization of a guest whose 
architecture does not match that of the host.  I'm wondering if qemu 
will, using this module, support an x86-64 guest on a ppc64 host.  I've 
tried to run such a configuration and have received an error that KVM is 
not supported.  I'm wondering if I built incorrectly or am not passing a 
necessary command line parameter. Understandably, qemu cannot support 
such a configuration when using kvm-hv.

Thanks,
Dave




Re: [Qemu-discuss] qemu too slow

2014-08-25 Thread David Gosselin

Have you verified that the kernel module for KVM is loaded?
$ lsmod | grep -i kvm
IIRC it should show "kvm" plus "kvm_intel" or "kvm_amd" on an x86 host
(the "kvm_hv"/"kvm_pr" modules are the ppc equivalents).
If you see nothing, then become root and do this:
# modprobe kvm-intel
or
# modprobe kvm-amd
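
You can also double-check that the CPU advertises hardware virtualization
and that the kvm device node exists:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
$ ls -l /dev/kvm
A zero count from the first command means KVM acceleration is not available
on that machine.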


On 8/25/2014 12:27 AM, YuGiOhJCJ Mailing-List wrote:
> Hello,
> 
> I am using qemu-2.1.0 on a Slackware 14.1 operating system (with
> Linux 3.15.8).
> 
> I run qemu like this: $ qemu-img create /tmp/qemu-img 5G $ sudo
> qemu-system-i386 -boot order=d -hda /tmp/qemu-img -cdrom
> slackware-14.1-install-dvd.iso -m 1000 -enable-kvm
> 
> And qemu is very slow. After 10 minutes, nothing is displayed on
> the screen, I am not able to see the Slackware installer.
> 
> I have tested with other ISOs: $ sudo qemu-system-i386 -boot
> order=d -hda /tmp/qemu-img -cdrom ubuntu-14.04.1-desktop-i386.iso
> -m 1000 -enable-kvm $ sudo qemu-system-i386 -boot order=d -hda
> /tmp/qemu-img -cdrom debian-7.6.0-i386-CD-1.iso -m 1000
> -enable-kvm But same issue...
> 
> Is there a way to understand what is happening?
> 
> Thank you. Best regards.
> 



Re: [Qemu-discuss] compile qemu-ga.exe error

2014-08-27 Thread David Gosselin
Do you have pkg-config properly installed?
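
For a mingw cross build I would expect the cross pkg-config wrapper to be
picked up, e.g. something like (assuming the Fedora/EPEL-style wrapper name):

$ i686-w64-mingw32-pkg-config --cflags glib-2.0

The stray '-pthread' in your log makes me suspect the native glib flags are
leaking into the cross compile.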

> On Aug 27, 2014, at 1:08, chenjie2  wrote:
> 
> Hi,
> When I compile qemu-ga.exe on centos6.3, I run into the following. Could 
> someone help me?
>  
> error log:
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    module.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    oslib-win32.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    qga/commands.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    qga/guest-agent-command-state.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    qga/commands-win32.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
>   CC    qga/channel-win32.o
> i686-w64-mingw32-gcc: unrecognized option '-pthread'
> qga/channel-win32.c:205:11: error: conflicting types for 'ga_channel_read'
> ./qga/channel.h:30:11: note: previous declaration of 'ga_channel_read' was here
> qga/channel-win32.c:269:11: error: conflicting types for 'ga_channel_write_all'
> ./qga/channel.h:31:11: note: previous declaration of 'ga_channel_write_all' was here
> make: *** [qga/channel-win32.o] Error 1


Re: [Qemu-discuss] Start qemu with crontab

2014-08-27 Thread David Gosselin
Your invocation line says "ifname=qemu4", whereas the error complains that 
qemu1 is missing; could that mismatch be the problem?

> On Aug 26, 2014, at 23:00, 邓尧  wrote:
> 
> Hi
> 
> I tried to start qemu-system-i386 with the following command through crontab, 
> the command works perfectly under a console or a remote shell, but fails 
> under crontab:
> 
> /opt/App/qemu/bin/qemu-system-i386 -enable-kvm -m 3072 -daemonize -net 
> nic,model=virtio,macaddr=e2:bb:aa:00:00:04 -net 
> tap,ifname=qemu4,script=no,downscript=no -usb -usbdevice host:1.11 -kernel 
> /opt/vmhost/kernel/vmlinuz -initrd /opt/vmhost/kernel/initrd.img -append 
> "kernel command line" -vnc :4,lossy /opt/vmhost/image/drive.img
> 
> The following error message was recorded in /var/spool/mail/root:
> TAP interface qemu1 missing.
> 
> TAP interface is actually created, owned by the root user. Anyone know the 
> cause of the error ? and how to fix it ?
> 
> Thanks.
> Deng Yao.



Re: [Qemu-discuss] About Qemu Console Screen Resolution

2014-09-09 Thread David Gosselin

What happens when you don't provide the "-vga cirrus" option?

On 9/9/14 6:54 AM, choongay@bench.com wrote:


Hi.

I am using Ubuntu OS.

I would like to use Qemu 2.0.0 to execute my buildroot bzImage and 
then execute Qt4 program.


I execute the Qemu using command below.

It can run but not in 1024x768 screen resolution.

If I change it to vga std, my Qt4 program cannot run.

It fails with the message below.

QLinuxFbScreen::connect: Invalid argument.

Error: failed to map framebuffer device to memory.

I would like to know how to get 1024x768 screen resolution.

Anybody can help?

Thank you very much.

Command

qemu-system-i386 -kernel bzImage -initrd rootfs.cpio -m 512 -append 
root=/dev/sda1 -vga cirrus -usbdevice tablet






[Qemu-discuss] Paused guests burn cpu cycles?

2014-10-22 Thread David Yang
I was curious why, even after pausing a KVM guest, the guest process would
still show 1-2% CPU usage.

I know there are many articles that talk about how even when the guests
show idle cpu, the guest process itself would show really high cpu usage
(especially if the guest is windows) and I am seeing this as well.

But shouldn't a paused guest be doing nothing at all?  From my evaluation
so far, the cpu usage is the same between an idle guest and a paused guest.
???

Can someone help?

Thanks,
David


Re: [Qemu-discuss] Qemu: AARCH64: Single Step exception does not work

2015-01-08 Thread David Long

On 01/08/15 00:32, Pratyush Anand wrote:

Hi All,

Has anyone tried to test the single step exception with ARM64 on Qemu? I
was testing the ARM64 uprobe patches[1] with qemu and I noticed that it does
not generate a single step exception. I also tried kprobe[2], which uses the
single step exception, and it does not work either. However, this code works
fine on real silicon.

Test case can be summarized as under:

1. After kernel code is executed, it programs ELR_EL1 with the address
of the instruction which is to be single stepped. Let's say 0x7ff004 is
the address of the instruction which is to be single stepped. So, ELR_EL1
has been programmed with 0x7ff004.

2. MDSCR_EL1.SS is set to 1

3. ERET has been called to execute instruction to be single stepped.

With Qemu, I always see
undefined instruction: pc=007ff008
Code: bad PC value

It seems that Qemu did not notice MDSCR_EL1.SS = 1, and since the kernel
had written only a single valid instruction at location 0x7ff004, it
raised an undefined exception while executing the next (invalid) instruction.

My Qemu version is:

QEMU emulator version 2.1.2, Copyright (c) 2003-2008 Fabrice Bellard

You may use the code in [3] to test single stepping.

Please let me know, if any more input is needed to reproduce it.

~Pratyush

[1] https://lkml.org/lkml/2014/12/31/151
[2] https://lkml.org/lkml/2014/11/18/33
[3] https://github.com/pratyushanand/linux.git:ml_arm64_uprobe_devel_v2



The singlestep support in QEMU is relatively recent.  Make sure you're 
running a fairly recent QEMU.


At one point QEMU was not setting ELR_EL* properly.  I'll forward you an 
email from Peter Maydell that has some relevance.  This was fixed.


Note that you have to both set SS *and* be sure debug exceptions are 
enabled in order to get the single-step exception.  My kprobes patch [2] 
should work in that regard.  Today I plan to post a v4 version of that 
patch, but I think the only thing it fixes relative to single-stepping 
is very intermittent failures due to interrupts not being properly 
disabled (you have to disable interrupts during single-stepping or you 
could end up single-stepping an interrupt handler).


Note that testing under QEMU will not reveal SMP issues.

-dl




[Qemu-discuss] Networking: Questions about Host to Guest interal traffic management

2015-05-07 Thread David Borman

Hi,

I have an instance running and I am curious about the internals and how 
the packets are routed from the physical (host system) layer to the 
internal, virtual guest interface (virtio, e1000, rtl8139, etc.) 
(IP/TCP/UDP/ICMP data only).


"How" is an inbound packet, reaching the host's physical ethernet card, 
forwarded to the virtual NIC inside the guest OS, and what happens if the 
guest OS firewall drops/rejects/accepts the packet?

1) Will the host system drop this packet physically?
2) Is the guest OS dropping the packet at the virtual guest network 
adapter?
(and if so, what data still remains inside the host system's memory 
structures, and what event triggers the memory cleanup?)


ps:
If there are any good docs out there, please let me know. On Google I 
only find very generic stuff.


Thx, David



[Qemu-discuss] .NET/ASP.NET web interface error

2015-10-27 Thread David Durham
Hello,

I am trying to run a .NET 2.0/ASP.NET application in Windows 7 running under 
qemu-system-i386; it runs fine on a regular Windows 7 system, but not in 
Qemu. I have tried using the following run commands:
 
qemu-system-i386 -m $((128*15)) -hda Win7.vmdk -vnc :0 -cpu core2duo -net nic 
-net user,hostfwd=::0-:0 -rtc driftfix=slew
 
qemu-system-i386 -m $((128*15)) -hda Win7.vmdk -vnc :0 -cpu core2duo -net nic 
-net user,hostfwd=::0-:0 -rtc driftfix=slew,base=localtime
 
qemu-system-i386 -m $((128*15)) -hda Win7.vmdk -vnc :0 -cpu core2duo -net nic 
-net user,hostfwd=::0-:0 -rtc clock=rt
 
qemu-system-i386 -m $((128*15)) -hda Win7.vmdk -vnc :0 -cpu core2duo -net nic 
-net user,hostfwd=::0-:0 -rtc driftfix=slew,base=localtime
 
qemu-system-i386 -m $((128*15)) -hda Win7.vmdk -vnc :0 -cpu core2duo -net nic 
-net user,hostfwd=::0-:0 -rtc base=localtime
 
I am getting the following error when trying to access the service's web 
interface:
 
Server Error in '/' Application.


The time span value must be positive.
Description: An unhandled exception occurred during the execution of the 
current web request. Please review the stack trace for more information about 
the error and where it originated in the code. 

Exception Details: System.ArgumentException: The time span value must be 
positive.

Source Error: 


An unhandled exception was generated during the execution of the current web 
request. Information regarding the origin and location of the exception can be 
identified using the exception stack trace below.


Stack Trace: 


 
[ArgumentException: The time span value must be positive.]
System.Configuration.PositiveTimeSpanValidator.Validate(Object value) +179
System.Configuration.ConfigurationProperty.Validate(Object value) +41
 
[ConfigurationErrorsException: The value for the property 'cookieTimeout' is 
not valid. The error is: The time span value must be positive.]
System.Configuration.ConfigurationProperty.Validate(Object value) +166
System.Configuration.ConfigurationProperty.SetDefaultValue(Object value) +126
System.Configuration.ConfigurationProperty..ctor(String name, Type type, Object 
defaultValue, TypeConverter typeConverter, ConfigurationValidatorBase 
validator, ConfigurationPropertyOptions options) +34
System.Web.Configuration.AnonymousIdentificationSection..cctor() +281
 
[TypeInitializationException: The type initializer for 
'System.Web.Configuration.AnonymousIdentificationSection' threw an exception.]
System.Runtime.CompilerServices.RuntimeHelpers._RunClassConstructor(IntPtr 
type) +0
System.Runtime.CompilerServices.RuntimeHelpers.RunClassConstructor(RuntimeTypeHandle
 type) +4
System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder 
binder, Object[] parameters, CultureInfo culture) +141
System.Reflection.ConstructorInfo.Invoke(Object[] parameters) +17
System.Configuration.TypeUtil.InvokeCtorWithReflectionPermission(ConstructorInfo
 ctor) +35
System.Configuration.RuntimeConfigurationFactory.CreateSectionImpl(RuntimeConfigurationRecord
 configRecord, FactoryRecord factoryRecord, SectionRecord sectionRecord, Object 
parentConfig, ConfigXmlReader reader) +32
System.Configuration.RuntimeConfigurationFactory.CreateSectionWithFullTrust(RuntimeConfigurationRecord
 configRecord, FactoryRecord factoryRecord, SectionRecord sectionRecord, Object 
parentConfig, ConfigXmlReader reader) +49
System.Configuration.RuntimeConfigurationFactory.CreateSection(Boolean 
inputIsTrusted, RuntimeConfigurationRecord configRecord, FactoryRecord 
factoryRecord, SectionRecord sectionRecord, Object parentConfig, 
ConfigXmlReader reader) +33
System.Configuration.RuntimeConfigurationRecord.CreateSection(Boolean 
inputIsTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, 
Object parentConfig, ConfigXmlReader reader) +71
System.Configuration.BaseConfigurationRecord.CallCreateSection(Boolean 
inputIsTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, 
Object parentConfig, ConfigXmlReader reader, String filename, Int32 line) +70
 
[ConfigurationErrorsException: An error occurred creating the configuration 
section handler for system.web/anonymousIdentification: The type initializer 
for 'System.Web.Configuration.AnonymousIdentificationSection' threw an 
exception.]
System.Configuration.BaseConfigurationRecord.CallCreateSection(Boolean 
inputIsTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, 
Object parentConfig, ConfigXmlReader reader, String filename, Int32 line) +285
System.Configuration.BaseConfigurationRecord.CreateSectionDefault(String 
configKey, Boolean getRuntimeObject, FactoryRecord factoryRecord, SectionRecord 
sectionRecord, Object& result, Object& resultRuntimeObject) +80
System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String 
configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, 
Boolean requestIsHere, Object& result, Object& resultRuntimeObject) +1346
System

[Qemu-discuss] Guest Self-Suspension

2015-11-20 Thread Guy David
I am running a FreeDOS guest (used as a gaming platform) on a Windows host. I 
wish a specific .bat script to run as soon as I resume the VM. My current 
attempts aim to somehow utilize the monitor from the guest itself and request 
it to stop the emulation. The only thing which comes to mind is communicating 
over a COM port, but I cannot seem to grasp how to do so.
Any thoughts?
Thank you,
Guy 

[Qemu-discuss] Getting qemu-system-i386 to use more than one core on Cortex A7 host

2016-01-03 Thread David Durham
Any suggestions or comments on how to do this are very welcome ... I built qemu 
with --target-list=i386-softmmu, and when I run qemu, top only shows one 
qemu-system-i386 process using 100% of a single core ... thanks



[Qemu-discuss] Specifying ACPI tables

2016-08-03 Thread David Renz
Hello everyone,

I want to test the effects of the ACPI table code extracted from my
system on a Linux system running under QEMU, by comparing it with an
identically configured VM that doesn't use this ACPI code.
However, I always get an error message that the maximum size for the
ACPI code is limited to 64 KB, which seems to be a known issue. Is there any
solution or workaround for this? And why does this limitation exist
anyway? By the way, it's the same with VirtualBox.
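
I assume the relevant option here is -acpitable; a sketch of the kind of
invocation I have in mind, with a placeholder file name:

qemu-system-x86_64 -acpitable file=my-dsdt.aml [...]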


Thanks in advance and kind regards

David


[Qemu-discuss] Running callbacks on instruction fetches and data accesses

2016-12-13 Thread David Vernet
Hello Qemu users,

I am interested in using Qemu for a research project of mine, and I was
curious whether it is possible, for a kernel running on Qemu, to run a callback
on instruction fetches and data accesses. In the context of Simics (an x86
emulator), this can be accomplished by creating a module with Simics as
such:

```
/* Initialize our Simics module. */
void init_local(void)
{
const class_data_t funcs = {
.new_instance = ls_new_instance,
.class_desc = "desc",
.description = "A simics module."
};

/* Register the empty device class. */
conf_class_t *conf_class = SIM_register_class(SIM_MODULE_NAME, &funcs);

/* Register our class as a trace consumer. */
static const trace_consume_interface_t trace_int = {
.consume = (void (*)(conf_object_t *, trace_entry_t *))my_tool_entrypoint
};
SIM_register_interface(conf_class, TRACE_CONSUME_INTERFACE, &trace_int);
}
```

By doing this, Simics will call `my_tool_entrypoint` on every instruction
and every data access; allowing me to instrument the kernel I'm running as
I see fit. Is such a feature available for a guest OS running on Qemu? I
see that there is some kind of tracing utility (
http://git.qemu-project.org/?p=qemu.git;a=blob_plain;f=docs/tracing.txt),
but from reading it, it doesn't seem like it's possible to do what I'm
looking for. We can modify the OS to hook into Qemu however we need. I am
aware that this would result in a gigantic performance hit, but that is
acceptable for my use case.

If this feature is not currently available, and it sounds like something
others would want, it is something I could spend a significant amount of time
on after the New Year, adding it to Qemu; I expect it would take at least a
couple of months.

Thanks in advance for your time and assistance. Please let me know if this
is a message that I should forward to qemu-devel and I will do that instead.

Regards,

David Vernet


[Qemu-discuss] Run ex VMWare image on QEMU with same virtual hardware ?

2017-05-18 Thread David Timms
Hi, I have an old WXP vmware machine I want to use in qemu instead. I
managed to:
- convert the disk image
- which BSODs during boot, and found how to:
- use virt/guestfish to add the IDE drivers to the image offline.
- can now boot and log in. But the machine thinks it has new hardware.
- and the OS wants to reactivate.

I am hoping there is a way to run/config/define the qemu emulation to
use identical hardware to vmware. Vmware .vmx says:
config.version = "8"

virtualHW.version = "4"

scsi0.present = "TRUE"

memsize = "1552"
ide0:0.present = "TRUE"
sound.virtualDev = "es1371"
numvcpus = "2"

Is it possible ?
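
The closest mapping I can imagine would be something like this (a rough
sketch with a placeholder disk path; I realise QEMU's es1370 is not the
exact ES1371 chip VMware emulates):

qemu-system-i386 -m 1552 -smp 2 -hda wxp-converted.img -soundhw es1370

but I don't know how closely the chipset and IDE/SCSI controller details
can be made to match what the .vmx describes.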

Cheers Dave.



Re: [Qemu-discuss] ppc and icount

2018-01-11 Thread David Gibson
On Wed, Jan 10, 2018 at 10:34:18AM +, Peter Maydell wrote:
> On 10 January 2018 at 08:57, Steven Seeger
>  wrote:
> > Sorry for another post. I did a bisect and found what is the bad commit for
> > me:
> >
> > 044897ef4a22af89aecb8df509477beba0a2e0ce is the first bad commit
> > commit 044897ef4a22af89aecb8df509477beba0a2e0ce
> > Author: Richard Purdie 
> > Date:   Mon Dec 4 22:25:43 2017 +
> >
> > target/ppc: Fix system lockups caused by interrupt_request state
> > corruption
> 
> Great -- thanks for the bisect. Let's take this to the -devel list;
> I've cc'd the PPC maintainers.
> 
> Context: Steven reports that we broke -icount for PPC guests with
> this commit:
> 
> $ ./build/all/ppc-softmmu/qemu-system-ppc  -icount auto
> qemu: fatal: Raised interrupt while not in I/O function
> NIP fff08978   LR fff08904 CTR  XER  CPU#0
> MSR  HID0   HF  iidx 3 didx 3
> Bad icount read
> 
> The backtrace from the assert is:
> 
> #0  tcg_handle_interrupt (cpu=0x77fc2010, mask=4) at qemu/accel/tcg/tcg-
> all.c:58
> #1  0x55962aa4 in cpu_interrupt (cpu=0x77fc2010, mask=4) at qemu/
> include/qom/cpu.h:859
> #2  0x55962e55 in cpu_interrupt_exittb (cs=0x77fc2010) at qemu/
> target/ppc/helper_regs.h:105
> #3  0x55964505 in do_rfi (env=0x77fca2b0, nip=197460, msr=4096)
> at qemu/target/ppc/excp_helper.c:998
> #4  0x55964555 in helper_rfi (env=0x77fca2b0) at qemu/target/ppc/
> excp_helper.c:1008
> #5  0x7fffe7c124b9 in code_gen_buffer ()
> 
> The problem is that icount was relying on the previous
> handling of do_rfi() as "just set state as we know we're
> going to be last insn in the TB".
> 
> Not sure how best to fix this (mark the insn as IO ok?)

Aw, man.  I've become target-ppc tcg maintainer by default, but tbh my
knowledge wasn't really deep enough to understand the problem that
044897ef was fixing in the first place.  And I barely know what icount
does at all.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-discuss] qemu-img convert stuck

2018-04-11 Thread David Lee
On Mon, Apr 9, 2018 at 3:35 AM, Benny Zlotnik  wrote:

> $ gdb -p 13024 -batch -ex "thread apply all bt"
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7f98275cfaff in ppoll () from /lib64/libc.so.6
>
> Thread 1 (Thread 0x7f983e30ab00 (LWP 13024)):
> #0  0x7f98275cfaff in ppoll () from /lib64/libc.so.6
> #1  0x55b55cf59d69 in qemu_poll_ns ()
> #2  0x55b55cf5ba45 in aio_poll ()
> #3  0x55b55ceedc0f in bdrv_get_block_status_above ()
> #4  0x55b55cea3611 in convert_iteration_sectors ()
> #5  0x55b55cea4352 in img_convert ()
> #6  0x55b55ce9d819 in main ()


My team caught this issue too after switching to CentOS 7.4 with qemu-img
2.9.0.
gdb shows exactly the same backtrace when the convert is stuck, and we are on
NFS.

Later we found the following:
1. The hang can happen on local storage, too.
2. Replacing qemu-img 2.9.0 with 2.6.0 makes everything work smoothly again.

BTW, we use "qemu-img convert" to convert a qcow2 and its backing files into
a single qcow2 image.


> On Sun, Apr 8, 2018 at 10:28 PM, Nir Soffer  wrote:
>
> > On Sun, Apr 8, 2018 at 9:27 PM Benny Zlotnik 
> wrote:
> >
> >> Hi,
> >>
> >> As part of copy operation initiated by rhev got stuck for more than a
> day
> >> and consumes plenty of CPU
> >> vdsm 13024  3117 99 Apr07 ?1-06:58:43 /usr/bin/qemu-img
> >> convert
> >> -p -t none -T none -f qcow2
> >> /rhev/data-center/bb422fac-81c5-4fea-8782-3498bb5c8a59/
> >> 26989331-2c39-4b34-a7ed-d7dd7703646c/images/597e12b6-
> >> 19f5-45bd-868f-767600c7115e/62a5492e-e120-4c25-898e-9f5f5629853e
> >> -O raw /rhev/data-center/mnt/mantis-nfs-lif1.lab.eng.tlv2.redhat.com:
> >> _vol__service/26989331-2c39-4b34-a7ed-d7dd7703646c/images/
> >> 9ece9408-9ca6-48cd-992a-6f590c710672/06d6d3c0-beb8-
> 4b6b-ab00-56523df185da
> >>
> >> The target image appears to have no data yet:
> >> qemu-img info 06d6d3c0-beb8-4b6b-ab00-56523df185da"
> >> image: 06d6d3c0-beb8-4b6b-ab00-56523df185da
> >> file format: raw
> >> virtual size: 120G (128849018880 bytes)
> >> disk size: 0
> >>
> >> strace -p 13024 -tt -T -f shows only:
> >> ...
> >> 21:13:01.309382 ppoll([{fd=12, events=POLLIN|POLLERR|POLLHUP}], 1, {0,
> >> 0},
> >> NULL, 8) = 0 (Timeout) <0.10>
> >> 21:13:01.309411 ppoll([{fd=12, events=POLLIN|POLLERR|POLLHUP}], 1, {0,
> >> 0},
> >> NULL, 8) = 0 (Timeout) <0.09>
> >> 21:13:01.309440 ppoll([{fd=12, events=POLLIN|POLLERR|POLLHUP}], 1, {0,
> >> 0},
> >> NULL, 8) = 0 (Timeout) <0.09>
> >> 21:13:01.309468 ppoll([{fd=12, events=POLLIN|POLLERR|POLLHUP}], 1, {0,
> >> 0},
> >> NULL, 8) = 0 (Timeout) <0.10>
> >>
> >> version: qemu-img-rhev-2.9.0-16.el7_4.13.x86_64
> >>
> >> What could cause this? I'll provide any additional information needed
> >>
> >
> > A backtrace may help, try:
> >
> > gdb -p 13024 -batch -ex "thread apply all bt"
> >
> > Also adding Kevin and qemu-block.
> >
> > Nir
> >
>


-- 
Thanks,
Li Qun


Re: [Qemu-discuss] qemu-img convert stuck

2018-04-11 Thread David Lee
On Thu, Apr 12, 2018 at 10:03 AM, Fam Zheng  wrote:

> On Thu, 04/12 09:51, David Lee wrote:
> > On Mon, Apr 9, 2018 at 3:35 AM, Benny Zlotnik 
> wrote:
> >
> > > $ gdb -p 13024 -batch -ex "thread apply all bt"
> > > [Thread debugging using libthread_db enabled]
> > > Using host libthread_db library "/lib64/libthread_db.so.1".
> > > 0x7f98275cfaff in ppoll () from /lib64/libc.so.6
> > >
> > > Thread 1 (Thread 0x7f983e30ab00 (LWP 13024)):
> > > #0  0x7f98275cfaff in ppoll () from /lib64/libc.so.6
> > > #1  0x55b55cf59d69 in qemu_poll_ns ()
> > > #2  0x55b55cf5ba45 in aio_poll ()
> > > #3  0x55b55ceedc0f in bdrv_get_block_status_above ()
> > > #4  0x55b55cea3611 in convert_iteration_sectors ()
> > > #5  0x55b55cea4352 in img_convert ()
> > > #6  0x55b55ce9d819 in main ()
> >
> >
> > My team caught this issue too after switching to CentOS 7.4 with qemu-img
> > 2.9.0
> > gdb shows exactly the same backtrace when the convert stuck, and we are
> on
> > NFS.
> >
> > Later we found the following:
> > 1. The stuck can happen on local storage, too.
> > 2. Replace qemu-img 2.9.0 with 2.6.0 and everything works smoothly again.
> >
> > BTW, we use "qemu-img convert" to convert qcow2 and its backing files
> into
> > a single qcow2 image.
>
> Maybe it is RHBZ 1508886?
>
> Fam
>


Thanks, Fam.  We just tracked this down to the same BZ too and are about to try
the commit ef6dada8b44e1e7c4bec5c1115903af9af415b50


-- 
Thanks,
Li Qun


Re: [Qemu-discuss] qemu-img convert stuck

2018-04-12 Thread David Lee
On Thu, Apr 12, 2018 at 10:16 AM, David Lee  wrote:
>> > My team caught this issue too after switching to CentOS 7.4 with qemu-img
>> > 2.9.0
>> > gdb shows exactly the same backtrace when the convert stuck, and we are on
>> > NFS.
>> >
>> > Later we found the following:
>> > 1. The stuck can happen on local storage, too.
>> > 2. Replace qemu-img 2.9.0 with 2.6.0 and everything works smoothly again.
>> >
>> > BTW, we use "qemu-img convert" to convert qcow2 and its backing files into
>> > a single qcow2 image.
>>
>> Maybe it is RHBZ 1508886?
>>
>> Fam
>
>
>
> Thanks, Fam.  We just tracked down to this BZ too and are about to trying
> the commit ef6dada8b44e1e7c4bec5c1115903af9af415b50

We tested qemu-kvm-ev-2.9.0-16.el7_4.14.1 - where from the source RPM we
verified it does contain ef6dada8b44e1e7c4bec5c1115903af9af415b50

But the issue still exists.  The convert gets stuck if one of the old
active overlays had been 'vol-resize'd with a qemu monitor command to a larger
size.  This looks like a prerequisite but not a sufficient condition to
trigger this badness.

-- 
Thanks,
Li Qun



Re: [Qemu-discuss] qemu-img convert stuck

2018-04-12 Thread David Lee
On Thu, Apr 12, 2018 at 10:23 PM, Fam Zheng  wrote:
> On Thu, 04/12 21:45, David Lee wrote:
>> On Thu, Apr 12, 2018 at 10:16 AM, David Lee  wrote:
>> >> > My team caught this issue too after switching to CentOS 7.4 with 
>> >> > qemu-img
>> >> > 2.9.0
>> >> > gdb shows exactly the same backtrace when the convert stuck, and we are 
>> >> > on
>> >> > NFS.
>> >> >
>> >> > Later we found the following:
>> >> > 1. The stuck can happen on local storage, too.
>> >> > 2. Replace qemu-img 2.9.0 with 2.6.0 and everything works smoothly 
>> >> > again.
>> >> >
>> >> > BTW, we use "qemu-img convert" to convert qcow2 and its backing files 
>> >> > into
>> >> > a single qcow2 image.
>> >>
>> >> Maybe it is RHBZ 1508886?
>> >>
>> >> Fam
>> >
>> >
>> >
>> > Thanks, Fam.  We just tracked down to this BZ too and are about to trying
>> > the commit ef6dada8b44e1e7c4bec5c1115903af9af415b50
>>
>> We tested qemu-kvm-ev-2.9.0-16.el7_4.14.1 - where from the source RPM we
>> verified it does contain ef6dada8b44e1e7c4bec5c1115903af9af415b50
>>
>> But the issue still exists.  The convert got stuck if one of the old
>> active overlay
>> had been 'vol-resize'd  with qemu monitor command to a larger size.  This 
>> looks
>> like a prerequisite but not sufficient condition to trigger this badness.
>
> So it is a separate issue. Did you try upstream master as well?
>
> Fam

Not yet.

-- 
Thanks,
Li Qun



Re: [Qemu-discuss] qemu-img convert stuck

2018-04-18 Thread David Lee
On Thu, Apr 12, 2018 at 11:57 PM, David Lee  wrote:
>>>
>>> We tested qemu-kvm-ev-2.9.0-16.el7_4.14.1 - where from the source RPM we
>>> verified it does contain ef6dada8b44e1e7c4bec5c1115903af9af415b50
>>>
>>> But the issue still exists.  The convert got stuck if one of the old
>>> active overlay
>>> had been 'vol-resize'd  with qemu monitor command to a larger size.  This 
>>> looks
>>> like a prerequisite but not sufficient condition to trigger this badness.
>>
>> So it is a separate issue. Did you try upstream master as well?
>>
>> Fam
>
> Not yet.

Stefan & FAM,

Here are the steps to reproduce this issue reliably:

# qemu-img create -f qcow2 test.qcow2 100m
... omitted
# qemu-img create -F qcow2 -f qcow2 -b test.qcow2 overlay.qcow2
... omitted
# qemu-img resize overlay.qcow2 +20m
Image resized.
# qemu-img create -F qcow2 -f qcow2 -b overlay.qcow2 overlay2.qcow2
... omitted
# qemu-img convert overlay2.qcow2 -f qcow2 -O qcow2 combined.qcow2
[hang]


# qemu-img --version
qemu-img version 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.14.1)

-- 
Thanks,
Li Qun



Re: [Qemu-discuss] qemu-img convert stuck

2018-04-18 Thread David Lee
On Wed, Apr 18, 2018 at 4:44 PM, Fam Zheng  wrote:
>
> qemu-img hangs because the convert_iteration_sectors loop cannot make any
> progress when it reaches the end of the base image. It is a bug (implicitly?)
> fixed by Eric Blake (Cc'ed) 's BDRV_BLOCK_EOF patches on upstream, backporting
> them to the above downstream version fixes the problem for me:
>
> commit c61e684e44272f2acb2bef34cf2aa234582a73a9
> Author: Eric Blake 
>
> block: Exploit BDRV_BLOCK_EOF for larger zero blocks
>
> commit fb0d8654ffc3ea1494067327fc4c4da5d0872724
> Author: Eric Blake 
>
> block: Add BDRV_BLOCK_EOF to bdrv_get_block_status()
>
> Fam

Fam,

Thanks for the info.

-- 
Thanks,
Li Qun



[Qemu-discuss] One question about "drive-backup"

2018-08-22 Thread David Lee
Hi, list

I was trying "drive-backup" for incremental backups, and so far so
good for individual drives.
The question is that, when I grouped the drive-backup into a
transaction, I got errors like:

{"id":"libvirt-377183","error":{"class":"GenericError","desc":"a
sync_bitmap was provided to backup_run, but received an incompatible
sync_mode (top)"}}

It is also the case when sync mode is changed to 'full'.  And the only
valid mode seems to be
'incremental', which is exactly what's documented in:
  
https://github.com/qemu/qemu/blob/master/docs/interop/bitmaps.rst#partial-transactional-failures

But, there are no warnings saying that only 'incremental' sync mode
can be grouped into a transaction.

Any suggestion on taking multiple drive-backups without sync mode =
'incremental'?

Thanks in advance.


PS. here is the snippet of my QMP command:
cut begin
{
  "execute": "transaction",
  "arguments": {
    "actions": [
      {
        "type": "drive-backup",
        "data": {
          "device": "drive-virtio-disk0",
          "target": "/opt/backups/82faf05a-2cad-43a7-b9d6-9c78075534fe.qcow2",
          "bitmap": "bitmap0",
          "job-id": "job-516",
          "sync": "top",
          "format": "qcow2"
        }
      },
      {
        "type": "drive-backup",
        "data": {
          "device": "drive-scsi0-0-0-1",
          "target": "/opt/backups/6ee3e2d4-6e40-49d5-b15f-ff20c0aafbee.qcow2",
          "bitmap": "bitmap1",
          "job-id": "job-517",
          "sync": "top",
          "format": "qcow2"
        }
      }
    ]
  }
}
cut ends
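
(For comparison, the per-drive command that works on its own looks roughly
like the following -- a sketch from memory, with the same device, target and
bitmap placeholders as above:)

{ "execute": "drive-backup",
  "arguments": { "device": "drive-virtio-disk0",
                 "target": "/opt/backups/82faf05a-2cad-43a7-b9d6-9c78075534fe.qcow2",
                 "bitmap": "bitmap0",
                 "sync": "incremental",
                 "format": "qcow2" } }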

-- 
Thanks,
Li Qun



Re: [Qemu-discuss] One question about "drive-backup"

2018-08-22 Thread David Lee
On Wed, Aug 22, 2018 at 11:41 PM David Lee  wrote:
> If I repeat the snippet below, there will be error message:
> {"id":"libvirt-381290","error":{"class":"GenericError","desc":"Need a
> root block node"}}
>
> What's happening?

I guess it is because the previous backup jobs were still in progress.

-- 
Thanks,
Li Qun



Re: [Qemu-discuss] One question about "drive-backup"

2018-08-22 Thread David Lee
Thank you, Fam.

I did the following, and this time QEMU doesn't complain about anything.
But what's the semantics of omitting the "bitmap" parameters?
And, how to continue with incremental backups after that?

If I repeat the snippet below, there will be an error message:
{"id":"libvirt-381290","error":{"class":"GenericError","desc":"Need a
root block node"}}

What's happening?

Thanks for your patience.

```
{
  "execute": "transaction",
  "arguments": {
    "actions": [
      {
        "type": "drive-backup",
        "data": {
          "device": "drive-virtio-disk0",
          "target": "/tmp/vda-inc.qcow2",
          "job-id": "bak-job1",
          "sync": "top",
          "format": "qcow2"
        }
      },
      {
        "type": "drive-backup",
        "data": {
          "device": "drive-scsi0-0-0-1",
          "target": "/tmp/sdb-inc.qcow2",
          "job-id": "bak-job2",
          "sync": "top",
          "format": "qcow2"
        }
      }
    ]
  }
}
```
On Wed, Aug 22, 2018 at 10:59 PM Fam Zheng  wrote:
>
> On Wed, 08/22 22:20, David Lee wrote:
> > Hi, list
> >
> > I was trying "drive-backup" for incremental backups, and so far so
> > good for individual drives.
> > The question is that, when I grouped the drive-backup into a
> > transaction, I got errors like:
> >
> > {"id":"libvirt-377183","error":{"class":"GenericError","desc":"a
> > sync_bitmap was provided to backup_run, but received an incompatible
> > sync_mode (top)"}}
> >
> > It is also the case when sync mode is changed to 'full'.  And the only
> > valid mode seems to be
> > 'incremental', which is exactly what's documented in:
> >   
> > https://github.com/qemu/qemu/blob/master/docs/interop/bitmaps.rst#partial-transactional-failures
> >
> > But, there are no warnings saying that only 'incremental' sync mode
> > can be grouped into a transaction.
> >
> > Any suggestion on taking multiple drive-backups without sync mode =
> > 'incremental'?
>
> Drop the 'bitmap' parameters if you're not doing incremental backup?
>
> Fam
>
> >
> > Thanks in advance.
> >
> >
> > PS. here is the snippet of my QMP command:
> > cut begin
> > {
> >  "execute": "transaction",
> >  "arguments": {
> >"actions": [
> >  {
> >"type": "drive-backup",
> >"data": {
> >  "device": "drive-virtio-disk0",
> >  "target": 
> > "/opt/backups/82faf05a-2cad-43a7-b9d6-9c78075534fe.qcow2",
> >  "bitmap": "bitmap0",
> >  "job-id": "job-516",
> >  "sync": "top",
> >  "format": "qcow2"
> >}
> >  },
> >  {
> >"type": "drive-backup",
> >"data": {
> >  "device": "drive-scsi0-0-0-1",
> >  "target": 
> > "/opt/backups/6ee3e2d4-6e40-49d5-b15f-ff20c0aafbee.qcow2",
> >  "bitmap": "bitmap1",
> >  "job-id": "job-517",
> >  "sync": "top",
> >  "format": "qcow2"
> >}
> >  }
> >],
> > }
> > cut ends
> >
> > --
> > Thanks,
> > Li Qun
> >



-- 
Thanks,
Li Qun



[Qemu-discuss] QEMU for Solaris SPARC

2013-02-28 Thread David Rayner
Hi

I'd like to create a virtual image of a Solaris SPARC 5.8 server; I need to 
keep its host application for reference purposes. Using Centos 6.3, I have 
installed VirtualBox 4.2.6.

How can I use qemu to create a virtual image of this SPARC system to be used 
within VirtualBox?

Thanks in advance

David



[Qemu-discuss] Windows Server 2008 R2 slow printing

2013-04-18 Thread David Gempton
Just wanted to capture this...

I have a QEMU host running a mix of Linux and Windows virtual machines.

The Windows Server 2008 R2 virtual machines had become very slow to print.  It 
was taking up to 2 minutes to get a simple page out of the printer.  Seemed to 
be an issue with the spooler completing the delivery of the job to the printer. 
(Not sure why they slowed down.  They had been quick in the past).

To fix this I installed TAP network devices and gave the Windows machines their 
own IP addresses on the LAN.  This enabled me to install new printer queues 
that connected directly to the network printers.  Previous connections had been 
via the CUPS spooler on the QEMU host.
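
(For reference, attaching a guest to a pre-created tap device looks roughly
like this -- the interface name and MAC address are examples only:)

-netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
-device e1000,netdev=net0,mac=52:54:00:12:34:56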

Using the TAP device instead of port redirection was also faster for Terminal 
Services Client (RDP) to connect to the Windows machines.

Little note:  All machines on the LAN must have unique MAC addresses.  If you 
see strange results when you ping your VMs using TAP network devices, take a 
look at the MAC addresses.  I ran into this and wasted a couple of hours 
before I checked for duplicate MACs.


In short: I suggest using TAP rather than port redirection.  It seems to be 
faster, more stable and more reliable.

Thanks
Dave.






[Qemu-discuss] video modes

2013-07-16 Thread David Oberhollenzer
My host OS is Arch linux with qemu 1.5.1 installed.

I'm using the qemu sdl display with option "-vga std".

When running a Windows guest, it automatically switches to the
highest possible screen resolution, which tends to be rediculously
large in comparison to my actual screen size and cannot be run in
fullscreen.

Also, the list of modes available to the guest, starting from 320*240
up, is quite long and contains many that are not in the list reported
by xrandr on the host system.

Is there a way to somehow limit the video modes to a maximum size? Can
qemu be made to just report actually supported modes, or the mode list
otherwise altered?

Thanks,
Dave



Re: [Qemu-discuss] video modes

2013-07-18 Thread David Oberhollenzer
Answering to myself:

I found the hardcoded maximum screen and color resolutions in vga_int.h
and hacked them into command-line arguments.

That should do it.
Patch follows soon.

Dave



[Qemu-discuss] Windows 8 64 bit + hda sound is distorted

2013-08-23 Thread David Woodfall

The sound in my windows 8 VM is very distorted. Any clues on what
could be the problem? My startup script:

export QEMU_ALSA_ADC_DEV="dsp1"
export QEMU_ALSA_DAC_DEV="dsp1"
export QEMU_AUDIO_DRV="alsa"
export QEMU_ALSA_DAC_BUFFER_SIZE=8192

qemu-kvm \
   -enable-kvm \
   -cpu host \
   -m 1024 \
   -soundhw hda \
   -hda ~/qemu/win8/8.img \
   -vga cirrus \
   -boot order=c \
   -smb /tmp \
   -sdl

As you see, I've tried upping the buffer size but it hasn't made any difference.







qemu wont load hvmloader as pc bios

2019-09-26 Thread David Lai
Hi  I hope this is the correct mailing list to send to.  If not, please let me 
know a better channel to send this to.  I haven't been able to google anything.
Also let me know if I should enable additional logging or whatever.

I have a VM under KVM which loads its kernel from the host.  This works 
fine with qemu-0.15/qemu-kvm-0.15 on fedora 16 (old), but it fails to load
using qemu-2.6.2/qemu-kvm-2.6.2 on fedora 24 (also a bit old), and I also
tested it with qemu-kvm-3.1.1 on fedora 30 (recent) and it failed too.

the VM XML looks something like this:

  
  <os>
    <type>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <kernel>/kvm-boot/vmlinuz-2.4.20-37.7.legacy</kernel>
    <cmdline>ro root=/dev/hda1 swap=/dev/hda2 panic=30 nousb console=ttyS0,115200n8</cmdline>
  </os>

  


The message I get is:
error: Failed to start domain tuna
error: internal error: process exited while connecting to monitor: qemu: could 
not load PC BIOS '/usr/lib/xen/boot/hvmloader'


I checked and the file /usr/lib/xen/boot/hvmloader is present and readable.
I even tried to copy the file /usr/lib/xen/boot/hvmloader from the old
fedora16 working server, and it still fails to load with the same error.

This is the logfile on fedora30:

2019-09-26 06:42:34.474+: starting up libvirt version: 5.1.0, package: 
9.fc30 (Fedora Project, 2019-06-20-16:43:58, ), qemu version: 
3.1.1qemu-3.1.1-1.fc30, kernel: 5.2.7-200.fc30.x86_64, hostname: polaris
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-kvm \
-name guest=tuna,debug-threads=on \
-S \
-object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-tuna/master-key.aes
 \
-machine pc-0.14,accel=kvm,usb=off,dump-guest-core=off \
-bios /usr/lib/xen/boot/hvmloader \
-m 512 \
-realtime mlock=off \
-smp 1,sockets=1,cores=1,threads=1 \
-uuid 30ef2a08-df2c-47be-9ee0-fef3fc65b26a \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=32,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc \
-no-shutdown \
-boot strict=on \
-kernel /kvm-boot/vmlinuz-2.4.20-37.7.legacy \
-append 'ro root=/dev/hda1 swap=/dev/hda2 panic=30 nousb 
console=ttyS0,115200n8' \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-drive file=/dev/data/tuna,format=raw,if=none,id=drive-ide0-0-0,cache=writeback 
\
-device 
ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1,write-cache=on
 \
-netdev tap,fd=34,id=hostnet0 \
-device 
rtl8139,netdev=hostnet0,id=net0,mac=00:16:36:54:40:16,bus=pci.0,addr=0x4 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \
-sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/3 (label charserial0)
qemu: could not load PC BIOS '/usr/lib/xen/boot/hvmloader'

-- 
"Vomunt et edant, edunt et vomant"
   Seneca (4 BC - 65 AD)



Re: 5.0.0-rc3 : Opcode 1f 12 0f 00 (7ce003e4) leaked temporaries

2020-04-17 Thread David Gibson
On Fri, Apr 17, 2020 at 10:01:53AM +0100, Peter Maydell wrote:
> On Fri, 17 Apr 2020 at 01:43, Dennis Clarke via  
> wrote:
> >
> >
> > Very strange messages from qemu 5.0.0-rc3 wherein I try to run :
> 
> Thanks for the report. Did this work with older QEMU?
> 
> > $ /usr/local/bin/qemu-system-ppc64 --version
> > QEMU emulator version 4.2.93
> > Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers
> > $
> > $
> > $ /usr/local/bin/qemu-system-ppc64 \
> >  > -machine pseries-4.1 -cpu power9 -smp 4 -m 12G -accel tcg \
> >  > -drive file=/home/ppc64/ppc64le.qcow2 \
> >  > -device virtio-net-pci,netdev=usernet \
> >  > -netdev user,id=usernet,hostfwd=tcp::1-:22 \
> >  > -serial stdio -display none -vga none
> > qemu-system-ppc64: warning: TCG doesn't support requested feature,
> > cap-cfpc=workaround
> > qemu-system-ppc64: warning: TCG doesn't support requested feature,
> > cap-sbbc=workaround
> > qemu-system-ppc64: warning: TCG doesn't support requested feature,
> > cap-ibs=workaround
> >
> >
> > SLOF **
> 
> [kernel boot log snipped]
> 
> 
> > root@titan:~#
> >
> >  From this point onwards I see an endless stream of :
> >
> > Opcode 1f 12 0f 00 (7ce003e4) leaked temporaries
> 
> > No idea what that is .. but it doesn't look friendly.
> >
> > Also I did compile qemu with --enable-debug --disable-strip and the
> > performance is truely horrific.  I can only assume that those options
> > are the cause. Any thoughts from anyone would be wonderful.
> 
> Well, you turned on debug and you got some warnings
> which are only emitted with debug on, so you can
> work around it by not doing that :-) And yes, debug
> is slower (it builds QEMU without optimization enabled
> so it's easier to debug QEMU in gdb, and it turns on
> various extra sanity checks.)
> 
> The warning is something we should fix -- it's a bug
> in the PPC code generation where we didn't correctly
> free a TCG temporary. The good news is that this won't
> generally have any visible bad effects, because the
> TCG common code will clean all those temporaries up
> at the end of each block anyway. The only time the leak
> is an issue is if guest code has a straight line sequence
> of hundreds of the same instruction in a row, in which
> case they'll all be in the same block and we might
> hit the limit on total temporaries. That won't happen
> unless guest code is deliberately doing something crazy.
> 
> David -- is this a known bug?

Not known to me.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson





basic qemu question

2020-06-04 Thread David Beccue
I have a pretty basic question about how qemu works... I have an analysis
library (no source) for ARM Cortex M3 processor that I'd like to run on
many files. My hardware would be very slow doing this. Will QEMU be able to
run this faster (assuming a fast PC, ofc)?

Cheers,
David

- - -
David Beccue


Re: basic qemu question

2020-06-12 Thread David Beccue
Thanks for your thoughts on this.

Yes, the analysis is just a library without src code that I compile into my
firmware that runs on a Cortex-M3.

With close to speed parity, I guess I could at least run a bunch in
parallel on my multicore AMD 1700x more easily than loading the code in 16
different hardware devices.  Plus, I assume that QEMU may have a way that I
can send debug output to my host machine to store in files there, yes?
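
(I am imagining something along these lines -- the machine choice and
firmware name are placeholders, with the output redirected per run:)

qemu-system-arm -M mps2-an385 -cpu cortex-m3 -kernel analysis-fw.elf \
    -nographic -semihosting > run-001.log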

On Fri, Jun 12, 2020 at 10:29 AM Alex Bennée  wrote:

> The following message is a courtesy copy of an article
> that has been posted to gmane.comp.emulators.qemu.user as well.
>
> David Beccue  writes:
>
> > I have a pretty basic question about how qemu works... I have an
> > analysis library (no source) for ARM Cortex M3 processor that I'd like
> > to run on many files.
>
> The analysis library runs on the Cortex-M3?
>
> > My hardware would be very slow doing
> > this. Will QEMU be able to run this faster (assuming a fast PC, ofc)?
>
> There is a fairly big performance penalty for softmmu emulation but
> given the speed of most modern PCs compared to microcontrollers you
> would possibly be able to achieve parity.
>
> >
> > Cheers,
> > David
> >
> > - - -
> > David Beccue
> >
>
> --
> Alex Bennée
>
>


Trouble connecting Windows 10 Host to PowerPC E500 QEMU Instance for Serial Communication

2020-07-07 Thread David Seccareccia
Hi,

I am using QEMU v4.2.0 on Windows 10 and I am trying to connect my PowerPC E500 
emulation to my host via serial connection. When trying to enter characters 
into the QEMU terminal, I see that they are not taken in. The current command I 
use to launch QEMU is shown below:

C:/Program\ Files/qemu/qemu-system-ppc -cpu e500 -d guest_errors,unimp -M 
ppce500 -m 2G,slots=3,maxmem=4G -s -monitor stdio -qmp 
tcp:127.0.0.1:5,server,nowait \
  -bios Executables/mRTOSKernel.elf \
  -device loader,file=Executables/Partition1.elf,addr=0x95000,force-raw=on \
  -device loader,file=Executables/Partition2.elf,addr=0x12B000,force-raw=on \
  -device 
loader,file=Executables/PartitionTestPrinter.elf,addr=0x1BB000,force-raw=on \
  -device 
loader,file=Executables/Partition1_Config.bin,addr=0x85000,force-raw=on \
  -device 
loader,file=Executables/Partition2_Config.bin,addr=0x11B000,force-raw=on \
  -device 
loader,file=Executables/PartitionTestPrinter_Config.bin,addr=0x1AB000,force-raw=on
 \
  -device loader,file=Executables/moduleConfig.bin,addr=0x55000,force-raw=on

I have tried the following flags to help resolve the issue:

1. Using the "-serial stdio" flag
   a. According to the QEMU documentation, this does not work for Windows hosts

2. Using the "-serial COMn" flag, where n was chosen to be 10
   a. The error "-serial COM10: could not connect serial device to character
      backend 'COM10'" is generated

3. Opening a TCP socket to connect to the emulation. I was able to connect
   with TeraTerm, however the characters were still not being registered
   a. Using -chardev socket,id=testComm,host=127.0.0.1,port=,server,telnet
      -serial chardev:testComm -nographic
   b. Only using the -serial option to define a TCP port (i.e. -serial
      tcp:127.0.0.1:,server)

4. Given the issues with Windows, I tried launching QEMU in MinGW64
   a. Using the same commands as in 3.a and 3.b, however characters were
      still not being received by the test
   b. Mapping the MinGW serial ports to the COM ports
      i. Using -serial /dev/ttyS9 and opening TeraTerm on COM10 (ttyS9 in
         MINGW64 maps to COM10), which fails with:
         C:\Program Files\qemu\qemu-system-ppc.exe: -serial /??/COM10: could
         not connect serial device to character backend '/??/COM10'
      ii. Using -chardev tty,id=ttyS9,path=/dev/ttyS9 -serial ttyS9, where
          QEMU throws the error:
          C:\Program Files\qemu\qemu-system-ppc.exe: -chardev
          tty,id=ttyS9,path=/??/COM10: Failed CreateFile (123)


Is there a configuration step I am missing that is causing the keyboard input 
to be ignored?

Thanks,
David Seccareccia


Qemu aarch64 emulation question

2021-09-05 Thread David Faller
Dear all,

 

I have a little question about aarch64 virtualization on an x86 host.

At home I built a virtual machine running aarch64 Debian.
My system setup is an AMD Ryzen 5900X.
On this system, aarch64 emulation runs really smoothly without high CPU 
usage.

After I finished my project and moved this VM to our cluster in our company, 
the VM runs very slowly and has high CPU usage.
The servers there use two E5-2683 v4 CPUs.

So my big question here is: is there a list of supported CPUs from which I 
could determine whether a virtual aarch64 VM will run smoothly?

Or a list of CPU feature flags needed when we would like to emulate aarch64?

 

best Regards and many thanks,

 

David Faller




Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
Hi there,

I am running qemu-system-x86_64 on aarch64 running Ubuntu 18.04 as both 
guest and host.

I couldn't get the stock qemu-system-x86_64 to boot correctly; as it was 
an old version 2.11.1, I decided to recompile from sources to see if 
that would fix the problem, but the problem still persists, using both 
top of master and stable-2.12 (currently on that):

[ TIME ] Timed out waiting for device dev-ttyS0.device.

The problem does not happen when using qemu-system-x86_64 on my Fedora 
desktop as host, so I wonder if I need something in my build options or 
if I need to rebuild my kernel with some added kernel configuration 
options...

Hopefully, some experts around here can help me with that if it is a 
known thing (I googled around, but other than mentioning that 2.11 is too 
old, could not find any clear reason for this problem).

Let me know if any more information is needed; I did not want to flood 
the list with lots of logs straight away, but a few bits are:

$ uname -a
Linux vpm-devkit 4.9.201-tegra #1 SMP PREEMPT Fri Jul 2 15:24:18 BST 
2021 aarch64 aarch64 aarch64 GNU/Linux

$ cat /proc/cpuinfo
processor   : 0
model name  : ARMv8 Processor rev 0 (v8l)
BogoMIPS    : 62.50
Features    : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics 
fphp asimdhp
CPU implementer : 0x4e
CPU architecture: 8
CPU variant : 0x0
CPU part    : 0x004
CPU revision    : 0
MTS version : 51035886

... 8 cpus ...

My build options (I did not do cross-compiling, as I was a bit unsure 
about pkg-config/glib-2.0 for build and host/target, so I compiled 
natively on the host machine, used a separate build folder):

../configure \
   --target-list=x86_64-softmmu \
   --enable-plugins \
   --enable-attr \
   --enable-auth-pam \
   --enable-cap-ng \
   --enable-curl \
   --enable-gnutls \
   --enable-kvm \ <== does not seem to be available as an accelerator
   --enable-libnfs \
   --enable-libudev \
   --enable-libusb \
   --enable-libxml2 \
   --enable-linux-aio \
   --enable-nettle \
   --enable-seccomp \
   --enable-snappy \
   --enable-spice \
   --enable-spice-protocol \
   --enable-usb-redir \
   --enable-vde \
   --enable-virtfs \
   --enable-virtiofsd \
   --enable-xkbcommon \
   --enable-pie \
   --enable-modules \
   --enable-membarrier \
   --enable-lto \
   --enable-tools \
   --enable-vvfat

Installed as per the defaults in /usr/local (the distro already have 
that in the path before the standard distro folders, so all runs as 
expected):

$ which qemu-system-x86_64
/usr/local/bin/qemu-system-x86_64

Some things like the following could not be used due to current kernel 
or ubuntu packages available (perhaps I need to compile fuse from sources?):

- --enable-libpmem (absent package, couldn't find the right one)
- --enable-libssh (0.8.0 but >= 0.8.7 for libssh-4-dev)
- --enable-fuse --enable-fuse-lseek (fuse2 available but fuse3 needed)
- --enable-netmap (not in the current kernel; the required header 
only exists for newer kernels)

I run it like this:

qemu-system-x86_64 \
   -boot order=dc,menu=on \
   -cdrom ubuntu-18.04.6-live-server-amd64.iso \
   -nographic \
   -serial mon:stdio \
   -kernel ufm/vmlinuz \
   -initrd ufm/initrd \
   -append 'boot=casper console=ttyS0 ---' \
   -m 16384 \
   -drive  file=ufm/ufm.fd0,format=raw,if=floppy \ <= empty image to 
avoid ubuntu complaining about fd0.
   -drive  file=ufm/ufm.img,format=raw,if=ide \
   -netdev bridge,br=virbr0,id=net0 \
   -device virtio-net-pci,netdev=net0,id=nic1 \
   -device usb-ehci,id=ehci

The vmlinuz & initrd come from the ubuntu iso in the casper folder (if I 
remember correctly); the append uses what the grub configuration had 
for the normal default kernel in the iso.

The virtual bridge works as expected with the right allow line in 
/usr/local/etc/qemu/bridge.conf and setting the qemu-bridge-helper u+s 
(plus a few extra packages).
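
Roughly, that amounted to something like this (paths are illustrative and 
match my /usr/local install prefix; virbr0 is libvirt's default bridge name):

   echo 'allow virbr0' >> /usr/local/etc/qemu/bridge.conf
   chmod u+s /usr/local/libexec/qemu-bridge-helper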

Anything else is as per the Ubuntu 18.04.5 (LTS) repos used by the host 
(I did not upgrade the packages, other than the packages needed to get 
the bridge working and the dev packages to compile qemu on the aarch64).

Regards




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
Hi Peter,

Answers in line.

On 08/11/2021 19:59, Peter Maydell wrote:
> On Mon, 8 Nov 2021 at 18:05, David Fernandez  wrote:
>> I am running qemu-system-x86_64 on aarch64 running Ubuntu 18.04 as both
>> guest and host.
>>
>> I couldn't get the stock qemu-system-x86_04 to boot correctly, as it was
>> an old version 2.11.1, I decided to recompile from sources to see if
>> that would fix the problem, but the problem still persists, using both
>> top of master and stable-2.12 (currently on that).
>>
>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
> Is there any more error message ? How long does the guest wait on
> this step before it times out ?
The guest waits at the end forever... probably because it tries to use the
normal console instead and that does not get displayed with my options.

These are all the services that fail:

[ TIME ] Timed out waiting for device dev-ttyS0.device.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
...
[FAILED] Failed to start Dispatcher daemon for systemd-networkd. <== 
network does start fine though.
See 'systemctl status networkd-dispatcher.service' for details.
...
[FAILED] Failed to start Wait until snapd is fully seeded. <== snapd 
runs fine though.
See 'systemctl status snapd.seeded.service' for details.
...
[FAILED] Failed to start Holds Snappy daemon refresh.
See 'systemctl status snapd.hold.service' for details.
[  OK  ] Started Update UTMP about System Runlevel Changes.
... waits forever ...


>> The problem does not happen when using qemu-system-x86_64 on my Fedora
>> desktop as host, so I wonder if I need something in my build options or
>> if I need to rebuild my kernel with some added kernel configuration
>> options...
> Are you testing with the exact same:
>   * command line
>   * files (guest kernel, initrd, iso, etc)
>   * QEMU version
> on both the aarch64 and x86-64 host ?

Yes.


> Does the x86-64 host still work OK if you run it with KVM turned off
> (ie matching the aarch64 host setup) ?

Have not tried that... is there an easy way/option to run that test? Or 
do I need
to compile from sources in Fedora?


>
>> Hopefully, some experts around here can help me with that if it is a
>> known thing (I google around but other than mentioning that 2.11 is too
>> old, could not find any clear reason about this problem).
> For aarch64 host, I would be a bit dubious about running 2.11 or 2.12 --
> they are both absolutely ancient in QEMU terms.
Is there a specific branch I should use? Could not see more than 2.12 in
git.qemu.org regarding stable branches, but happy to compile and try any 
other.

>
> What are the specs of the host CPU (in particular, how fast is it)?
> If it's too underpowered it's possible it just can't run the guest
> fast enough for it to boot up before the guest's systemd tasks
> time out (though it would have to be pretty bad for this to be
> the problem).
The machine is a Jetson AGX Xavier; it uses a "Volta" CPU with 8 cores.
In theory it should be powerful enough, but you tell me; nVidia does not 
offer a lot of information on their systems anyway.

>> --enable-kvm \ <== does not seem to ba available as an accelerator
> That is expected -- KVM can only accelerate guests where the
> host and guest are the same CPU architecture, so it can do
> aarch64-on-aarch64 and x86-on-x86, but not x86-on-aarch64.
Good to learn that... here is the output of virt-host-validate, which I
happened to find out about:

$ sudo virt-host-validate
   QEMU: Checking if device /dev/kvm exists                   : FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
   QEMU: Checking if device /dev/vhost-net exists             : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
   QEMU: Checking if device /dev/net/tun exists               : PASS
   QEMU: Checking for cgroup 'memory' controller support      : PASS
   QEMU: Checking for cgroup 'memory' controller mount-point  : PASS
   QEMU: Checking for cgroup 'cpu' controller support         : PASS
   QEMU: Checking for cgroup 'cpu' controller mount-point     : PASS
   QEMU: Checking for cgroup 'cpuacct' controller support     : PASS
   QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
   QEMU: Checking for cgroup 'cpuset' controller support      : PASS
   QEMU: Checking for cgroup 'cpuset' controller mount-point  : PASS
   QEMU: Checking for cgroup 'devices' controller support     : PASS

Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
On 08/11/2021 20:21, David Fernandez wrote:
> Hi Peter,
>
> Answers in line.
>
> On 08/11/2021 19:59, Peter Maydell wrote:
>> On Mon, 8 Nov 2021 at 18:05, David Fernandez 
>>  wrote:
>>> I am running qemu-system-x86_64 on aarch64 running Ubuntu 18.04 as both
>>> guest and host.
>>>
>>> I couldn't get the stock qemu-system-x86_04 to boot correctly, as it 
>>> was
>>> an old version 2.11.1, I decided to recompile from sources to see if
>>> that would fix the problem, but the problem still persists, using both
>>> top of master and stable-2.12 (currently on that).
>>>
>>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
>> Is there any more error message ? How long does the guest wait on
>> this step before it times out ?
> The guest waits at the end forever... probably because it tries to use 
> the
> normal console instead and that does not get displayed with my options.
>
> These are all the services that fail:
>
> [ TIME ] Timed out waiting for device dev-ttyS0.device.
> [DEPEND] Dependency failed for Serial Getty on ttyS0.
> ...
> [FAILED] Failed to start Dispatcher daemon for systemd-networkd. <== 
> network does start fine though.
> See 'systemctl status networkd-dispatcher.service' for details.
> ...
> [FAILED] Failed to start Wait until snapd is fully seeded. <== snapd 
> runs fine though.
> See 'systemctl status snapd.seeded.service' for details.
> ...
> [FAILED] Failed to start Holds Snappy daemon refresh.
> See 'systemctl status snapd.hold.service' for details.
> [  OK  ] Started Update UTMP about System Runlevel Changes.
> ... waits forever ...
>
>
>>> The problem does not happen when using qemu-system-x86_64 on my Fedora
>>> desktop as host, so I wonder if I need something in my build options or
>>> if I need to rebuild my kernel with some added kernel configuration
>>> options...
>> Are you testing with the exact same:
>>   * command line
>>   * files (guest kernel, initrd, iso, etc)
>>   * QEMU version
>> on both the aarch64 and x86-64 host ?
>
> Yes. -- Correction -- The Fedora version is:
> $ qemu-system-x86_64 -version
> QEMU emulator version 5.2.0 (qemu-5.2.0-8.fc34)
> Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers
>
>
>
>> Does the x86-64 host still work OK if you run it with KVM turned off
>> (ie matching the aarch64 host setup) ?
>
> Have not tried that... is there an easy way/option to run that test? 
> Or do I need
> to compile from sources in Fedora?
>
>
>>
>>> Hopefully, some experts around here can help me with that if it is a
>>> known thing (I google around but other than mentioning that 2.11 is too
>>> old, could not find any clear reason about this problem).
>> For aarch64 host, I would be a bit dubious about running 2.11 or 2.12 --
>> they are both absolutely ancient in QEMU terms.
> Is there a specific branch I should use? Could not see more than 2.12 in
> git.qemu.org regarding stable branches, but happy to compile and try 
> any other.
>
>>
>> What are the specs of the host CPU (in particular, how fast is it)?
>> If it's too underpowered it's possible it just can't run the guest
>> fast enough for it to boot up before the guest's systemd tasks
>> time out (though it would have to be pretty bad for this to be
>> the problem).
> The machine is a Jetson AGX Xavier, uses a "Volta" CPU with 8 cores.
> In theory should be powerful enough, but you tell me, nVidia does not 
> offer a
> lot of information on their systems anyway.
>
>>>     --enable-kvm \ <== does not seem to ba available as an accelerator
>> That is expected -- KVM can only accelerate guests where the
>> host and guest are the same CPU architecture, so it can do
>> aarch64-on-aarch64 and x86-on-x86, but not x86-on-aarch64.
> Good to learn that... here you are the output of virt-host-validate 
> that I
> happened to find about:
>
> $ sudo virt-host-validate
>   QEMU: Checking if device /dev/kvm 
> exists   : FAIL (Check that CPU and 
> firmware supports virtualization and kvm module is loaded)
>   QEMU: Checking if device /dev/vhost-net 
> exists : WARN (Load the 'vhost_net' module 
> to improve performance of virtio networking)
>   QEMU: Checking if device /dev/net/tun 
> exists   : PASS
>   QEMU: Checking for cgroup 'memory' controller 
> support  : PASS
>   QEMU: Checking for cgroup '

Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
On 08/11/2021 20:50, Peter Maydell wrote:
> On Mon, 8 Nov 2021 at 20:22, David Fernandez  wrote:
>> Hi Peter,
>>
>> Answers in line.
>>
>> On 08/11/2021 19:59, Peter Maydell wrote:
>>> On Mon, 8 Nov 2021 at 18:05, David Fernandez  
>>> wrote:
>>>> I am running qemu-system-x86_64 on aarch64 running Ubuntu 18.04 as both
>>>> guest and host.
>>>>
>>>> I couldn't get the stock qemu-system-x86_04 to boot correctly, as it was
>>>> an old version 2.11.1, I decided to recompile from sources to see if
>>>> that would fix the problem, but the problem still persists, using both
>>>> top of master and stable-2.12 (currently on that).
>>>>
>>>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
>>> Is there any more error message ? How long does the guest wait on
>>> this step before it times out ?
>> The guest waits at the end forever... probably because it tries to use the
>> normal console instead and that does not get displayed with my options.
>>
>> These are all the services that fail:
>>
>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
>> [DEPEND] Dependency failed for Serial Getty on ttyS0.
>> ...
>> [FAILED] Failed to start Dispatcher daemon for systemd-networkd. <==
>> network does start fine though.
>> See 'systemctl status networkd-dispatcher.service' for details.
>> ...
>> [FAILED] Failed to start Wait until snapd is fully seeded. <== snapd
>> runs fine though.
>> See 'systemctl status snapd.seeded.service' for details.
>> ...
>> [FAILED] Failed to start Holds Snappy daemon refresh.
>> See 'systemctl status snapd.hold.service' for details.
>> [  OK  ] Started Update UTMP about System Runlevel Changes.
>> ... waits forever ...
> This does sound like a lot of things might be timing out, not just
> the "wait for the serial port" part. OTOH the host CPU is supposed to be
> 2.26GHz so it shouldn't really be having that much trouble (assuming
> you aren't heavily loading the host with other stuff!).
>
> There used to be a bug years back where a bug in a guest udev
> rule meant that the guest would spawn a lot of processes in a way
> that was invisible for running on real hardware but was just enough
> extra load to make the slower emulated setup timeout in various
> ways including that "Timed out waiting for device" error:
> https://bugs.launchpad.net/ubuntu/+source/debian-installer/+bug/1615021
> But I think that should have been fixed by 18.04. You might try
> a 20.04 Ubuntu guest just in case, I guess...

I'll try that and let you know...


>
>>>> The problem does not happen when using qemu-system-x86_64 on my Fedora
>>>> desktop as host, so I wonder if I need something in my build options or
>>>> if I need to rebuild my kernel with some added kernel configuration
>>>> options...
>>> Are you testing with the exact same:
>>>* command line
>>>* files (guest kernel, initrd, iso, etc)
>>>* QEMU version
>>> on both the aarch64 and x86-64 host ?
>> Yes.

Sorry, I missed that the version on fedora is 5.2.0 (re-sent the email 
but the list is slow).


>>
>>
>>> Does the x86-64 host still work OK if you run it with KVM turned off
>>> (ie matching the aarch64 host setup) ?
>> Have not tried that... is there an easy way/option to run that test? Or
>> do I need
>> to compile from sources in Fedora?
> As long as your QEMU commandline doesn't have -enable-kvm or any
> other kvm-related option in it it should default to emulation.

I believe I used to run without it... I'll recheck and confirm.


>
>>>> Hopefully, some experts around here can help me with that if it is a
>>>> known thing (I google around but other than mentioning that 2.11 is too
>>>> old, could not find any clear reason about this problem).
>>> For aarch64 host, I would be a bit dubious about running 2.11 or 2.12 --
>>> they are both absolutely ancient in QEMU terms.
>> Is there a specific branch I should use? Could not see more than 2.12 in
>> git.qemu.org regarding stable branches, but happy to compile and try any
>> other.
> We switched some time ago to using tags rather than branches;
> you could use the v6.1.0 tag for the most recent release, or
> master for bleeding-edge.

As noted above, I did not realize straight away that the fedora version 
is 5.2.0...

I'll try 5.2.0 first, then 6.1.0 (I tried master but the problem was 
still there).

Then I might do the newer guest version to see what happens.


> -- PMM




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
On 08/11/2021 20:57, David Fernandez wrote:
> On 08/11/2021 20:50, Peter Maydell wrote:
>> On Mon, 8 Nov 2021 at 20:22, David Fernandez 
>>  wrote: Does the x86-64 host still work OK 
>> if you run it with KVM turned off
>> (ie matching the aarch64 host setup) ?
>>> Have not tried that... is there an easy way/option to run that test? Or
>>> do I need
>>> to compile from sources in Fedora?
>> As long as your QEMU commandline doesn't have -enable-kvm or any
>> other kvm-related option in it it should default to emulation.
>
> I believe I used to run without it... I'll recheck and confirm.

So yes, I run without -enable-kvm in the command line:

 From my first email:

I run it like this:

qemu-system-x86_64 \
   -boot order=dc,menu=on \
   -cdrom ubuntu-18.04.6-live-server-amd64.iso \
   -nographic \
   -serial mon:stdio \
   -kernel ufm/vmlinuz \
   -initrd ufm/initrd \
   -append 'boot=casper console=ttyS0 ---' \
   -m 16384 \
   -drive  file=ufm/ufm.fd0,format=raw,if=floppy \ <= empty image to 
avoid ubuntu complaining about fd0.
   -drive  file=ufm/ufm.img,format=raw,if=ide \
   -netdev bridge,br=virbr0,id=net0 \
   -device virtio-net-pci,netdev=net0,id=nic1 \
   -device usb-ehci,id=ehci
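
(For completeness, the accelerator can also be pinned to pure emulation 
explicitly rather than relying on the default; a minimal sketch, assuming a 
QEMU new enough to support -accel:)

   qemu-system-x86_64 -accel tcg -m 4096 -nographic \
      -serial mon:stdio \
      -cdrom ubuntu-18.04.6-live-server-amd64.iso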


>>>>> Hopefully, some experts around here can help me with that if it is a
>>>>> known thing (I google around but other than mentioning that 2.11 
>>>>> is too
>>>>> old, could not find any clear reason about this problem).
>>>> For aarch64 host, I would be a bit dubious about running 2.11 or 
>>>> 2.12 --
>>>> they are both absolutely ancient in QEMU terms.
>>> Is there a specific branch I should use? Could not see more than 
>>> 2.12 in
>>> git.qemu.org regarding stable branches, but happy to compile and try 
>>> any
>>> other.
>> We switched some time ago to using tags rather than branches;
>> you could use the v6.1.0 tag for the most recent release, or
>> master for bleeding-edge.
>
> As noted above, I did not realize straight away that the fedora 
> version is 5.2.0...
>
> I'll try 5.2.0 first, then 6.1.0 (I tried master but the problem was 
> still there).
>
> Then I might do the newer guest version to see what happens.

Right, checked out the v5.2.0 tag, recompiled (no --enable-spice-protocol, 
only --enable-spice, and no --enable-lto), and the problem is still there.

Hopefully doing a git checkout -f v5.2.0 is enough (not sure if I need 
some other option to force some submodule change... I believe the 
configure or make do that). qemu-system-x86_64 -version gives 5.2.0, 
matching the Fedora host, but it still does not find the ttyS0 device.
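
(For reference, the sequence that should cover it is roughly the following; 
the submodule update is the part I was unsure about:)

   git checkout -f v5.2.0
   git submodule update --init --recursive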

Will try 6.1.0 now, then download the Ubuntu 20 iso (and extract the 
vmlinuz and initrd) and see if that changes anything.

>
>
>> -- PMM




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
On 08/11/2021 21:19, David Fernandez wrote:
> On 08/11/2021 20:57, David Fernandez wrote:
>> On 08/11/2021 20:50, Peter Maydell wrote:
>>> On Mon, 8 Nov 2021 at 20:22, David Fernandez 
>>>  wrote:
>>>> Is there a specific branch I should use? Could not see more than 
>>>> 2.12 in
>>>> git.qemu.org regarding stable branches, but happy to compile and 
>>>> try any
>>>> other.
>>> We switched some time ago to using tags rather than branches;
>>> you could use the v6.1.0 tag for the most recent release, or
>>> master for bleeding-edge.
>>
>> As noted above, I did not realize straight away that the fedora 
>> version is 5.2.0...
>>
>> I'll try 5.2.0 first, then 6.1.0 (I tried master but the problem was 
>> still there).
>>
>> Then I might do the newer guest version to see what happens.
>
> Right, checked out v5.2.0 tag, recompiled (no --enable-spice-protocol, 
> only --enable-spice, and no --enable-lto), and the problem is still there
>
> Hopefully doing a git checkout -f v5.2.0 is enough (not sure if I need 
> some other option to force some submodule change... I beleieve the 
> configure or make do that). The qemu-system-x84_64 -version gives 
> 5.2.0, as in the host, but it still does not find the ttyS0 device.
>
> Will try now 6.1.0 then download the Ubunto 20 iso (and extract the 
> vmlinuz and initrd) and see if that changes anything.
>
Tried v6.1.0 from a clean qemu repo, just to be sure that make checks 
out the right submodules, and it's still the same, so I will try Ubuntu 20... 
this will probably take until tomorrow to get results... I'll let you know.
>>
>>
>>> -- PMM
>
>



Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-08 Thread David Fernandez
On 08/11/2021 22:00, David Fernandez wrote:
> On 08/11/2021 21:19, David Fernandez wrote:
>> On 08/11/2021 20:57, David Fernandez wrote:
>>> On 08/11/2021 20:50, Peter Maydell wrote:
>>>> On Mon, 8 Nov 2021 at 20:22, David Fernandez 
>>>>  wrote:
>>>>> Is there a specific branch I should use? Could not see more than 
>>>>> 2.12 in
>>>>> git.qemu.org regarding stable branches, but happy to compile and 
>>>>> try any
>>>>> other.
>>>> We switched some time ago to using tags rather than branches;
>>>> you could use the v6.1.0 tag for the most recent release, or
>>>> master for bleeding-edge.
>>>
>>> As noted above, I did not realize straight away that the fedora 
>>> version is 5.2.0...
>>>
>>> I'll try 5.2.0 first, then 6.1.0 (I tried master but the problem was 
>>> still there).
>>>
>>> Then I might do the newer guest version to see what happens.
>>
>> Right, checked out v5.2.0 tag, recompiled (no 
>> --enable-spice-protocol, only --enable-spice, and no --enable-lto), 
>> and the problem is still there
>>
>> Hopefully doing a git checkout -f v5.2.0 is enough (not sure if I 
>> need some other option to force some submodule change... I beleieve 
>> the configure or make do that). The qemu-system-x84_64 -version gives 
>> 5.2.0, as in the host, but it still does not find the ttyS0 device.
>>
>> Will try now 6.1.0 then download the Ubunto 20 iso (and extract the 
>> vmlinuz and initrd) and see if that changes anything.
>>
> Tried v6.1.0 from a clean qemu repo, just to be sure that make checks 
> out the right submodules, and still the same, so will try Ububntu 
> 20... this probably take until tomorrow to get results... let you know.

Right, it seems the torrent download for the iso was really quick this 
time, so I tested qemu 6.1.0 with Ubuntu 20.04.3 on both my intel 
desktop and the aarch64 Jetson, and it works fine on my desktop and 
fails in the same way on the Jetson.

Any ideas?

>>>> -- PMM




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-10 Thread David Fernandez
As I have not heard anything yet, I thought I would summarize the current 
status of this problem. Please let me know if any other tests or information 
are needed.

I am running qemu-system-x86_64 v5.2.0 (also tried v6.1.0 and top of 
master) on:
    - aarch64 (Jetson AGX Xavier) with Ubuntu 18.04.5 as a host 
(compiled from
  git sources as distro version for it was 2.11, which is too old), 
and on
    - x86_64 (my laptop) with Fedora 34 as a host (here the 
qemu-system-x86_64
  distro version is 5.2.0).

Running Ubuntu 18.04.6 server install cdrom (also tried Ubuntu 20.04.3) 
as the
guest.

The following services fail on the Jetson, but not on the laptop. The 
first one
is the ttyS0 console, which seems the most important thing as it is provided
directly by the virtual emulation (-serial mon:stdio):

[ TIME ] Timed out waiting for device dev-ttyS0.device.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
...
[FAILED] Failed to start Dispatcher daemon for systemd-networkd. <== 
network does start fine though.
See 'systemctl status networkd-dispatcher.service' for details.
...
[FAILED] Failed to start Wait until snapd is fully seeded. <== snapd 
runs fine though.
See 'systemctl status snapd.seeded.service' for details.
...
[FAILED] Failed to start Holds Snappy daemon refresh.
See 'systemctl status snapd.hold.service' for details.
[  OK  ] Started Update UTMP about System Runlevel Changes.
... waits forever ...

Note that the Jetson has 8 cores running at 2.25GHz, and this test is 
run just after boot with no other user applications launched.

I wonder if I need something in my build options or if I need to rebuild my
kernel with some added kernel configuration options...

Hopefully, some experts around here can help me with that if it is a 
known thing (I googled around but, other than mentions that 2.11 is too old, 
could not find any clear reason for this problem).

My build options for the Jetson (I did not do cross-compiling, as I was 
a bit
unsure about pkg-config/glib-2.0 for build and host/target, so I compiled
natively on the Jetson host machine, using a separate build folder):

../configure \
   --target-list=x86_64-softmmu \
   --enable-plugins \
   --enable-attr \
   --enable-auth-pam \
   --enable-cap-ng \
   --enable-curl \
   --enable-gnutls \
   --enable-kvm \ <== not available as an accelerator for Ubuntu host on 
Jetson
   --enable-libnfs \
   --enable-libudev \
   --enable-libusb \
   --enable-libxml2 \
   --enable-linux-aio \
   --enable-nettle \
   --enable-seccomp \
   --enable-snappy \
   --enable-spice \
   --enable-usb-redir \
   --enable-vde \
   --enable-virtfs \
   --enable-virtiofsd \
   --enable-xkbcommon \
   --enable-pie \
   --enable-modules \
   --enable-membarrier \
   --enable-tools \
   --enable-vvfat

Installed as per the default prefix in /usr/local (the distro already 
has that in the path before the standard distro folders, so all runs as 
expected):

$ which qemu-system-x86_64
/usr/local/bin/qemu-system-x86_64

Some things like the following could not be used due to current kernel or
ubuntu packages available (perhaps I need to compile fuse from sources?):

    - --enable-libpmem (absent package, couldn't find the right one)
    - --enable-libssh (0.8.0 but >= 0.8.7 for libssh-4-dev)
    - --enable-fuse --enable-fuse-lseek (fuse2 available but fuse3 needed)
    - --enable-netmap (not in the current kernel, the required header only
  exists for newer kernels)

I run it with the following command line on both Jetson and laptop:

qemu-system-x86_64 \
   -boot order=dc,menu=on \
   -cdrom ubuntu-18.04.6-live-server-amd64.iso \
   -nographic \
   -serial mon:stdio \
   -kernel ufm/vmlinuz \
   -initrd ufm/initrd \
   -append 'boot=casper console=ttyS0 ---' \
   -m 16384 \
   -drive  file=ufm/ufm.fd0,format=raw,if=floppy \ <= empty image to 
avoid ubuntu complaining about fd0.
   -drive  file=ufm/ufm.img,format=raw,if=ide \
   -netdev bridge,br=virbr0,id=net0 \
   -device virtio-net-pci,netdev=net0,id=nic1 \
   -device usb-ehci,id=ehci

The vmlinuz & initrd come from the ubuntu iso in the casper folder, the 
append
uses what the grub configuration had for the normal default kernel in 
the iso.

The virtual bridge works as expected with the right allow line in
/usr/local/etc/qemu/bridge.conf and setting the qemu-bridge-helper u+s 
(plus a
few extra packages).

Anything else is as per the Ubuntu 18.04.5 (LTS) repos used by the host (I did 
not upgrade the packages, other than the packages needed to get the bridge 
working and the dev packages to compile qemu on the aarch64).

In the Jetson machine running Ubuntu 18.04.5 I get:

$ uname -a
Linux vpm-devkit 4.9.201-tegra #1 SMP PREEMPT Fri Jul 2 15:24:18 BST 
2021 aarch64 aarch64 aarch64 GNU/Linux

$ cat /proc/cpuinfo
processor   : 0
model name  : ARMv8 Processor rev 0 (v8l)
BogoMIPS    : 62.50
Features    : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics 
fphp asimdhp
CPU imple

Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-11 Thread David Fernandez
On 11/11/2021 11:23, Peter Maydell wrote:
> [No suele recibir correo electrónico de peter.mayd...@linaro.org. Obtenga 
> información acerca de por qué esto es importante en 
> http://aka.ms/LearnAboutSenderIdentification.]
>
> On Wed, 10 Nov 2021 at 18:56, David Fernandez  wrote:
>> As I have not hear anything yet, I thought I would summarize the current
>> status
>> for this problem. Please, let me know if any other tests or information are
>> needed.
>>
>> I am running qemu-system-x86_64 v5.2.0 (also tried v6.1.0 and top of
>> master) on:
>>  - aarch64 (Jetson AGX Xavier) with Ubuntu 18.04.5 as a host
>> (compiled from
>>git sources as distro version for it was 2.11, which is too old),
>> and on
>>  - x86_64 (my laptop) with Fedora 34 as a host (here the
>> qemu-system-x86_64
>>distro version is 5.2.0).
>>
>> Running Ubuntu 18.04.6 server install cdrom (also tried Ubuntu 20.04.3)
>> as the
>> guest.
>>
>> The following services fail on the Jetson, but not on the laptop. The
>> first one
>> is the ttyS0 console, which seems the most important thing as it is provided
>> directly by the virtual emulation (-serial mon:stdio):
>>
>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
> I'm pretty sure this isn't actually a problem with the emulation
> of the serial device (after all you are seeing all these messages
> so far on the serial console, right?).

Right, maybe it is not; I do not know what the problem is.

>   The problem is that the
> udev machinery that creates nodes in /dev is being too slow
> (or possibly is failing for some other reason, but given all the
> other timeouts I'm guessing "everything is too slow") and so
> the systemd unit that is waiting for /dev/ttyS0 to be created
> times out.

What is a bit puzzling is that this is supposed to all run in an 
emulated machine having its own simulated time, so yes things are slow, 
but everything should happen as expected, just slowly.

I guess I will compile from sources on Fedora and see if I get the same 
problem, as it is a bit hard to believe that the only way to run qemu is 
to have a high end machine dedicated just to run an install cd.

If the qemu compiled from sources on Fedora also fails, then we should 
conclude that there is something about the distro-supplied qemu that is 
not being done properly when I compile from sources... it may be my fault 
or a bug, but...

As mentioned, let me know if you know of something that could fix this.

Regards

>
> -- PMM




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-11 Thread David Fernandez
On 11/11/2021 13:42, Peter Maydell wrote:
> [No suele recibir correo electrónico de peter.mayd...@linaro.org. Obtenga 
> información acerca de por qué esto es importante en 
> http://aka.ms/LearnAboutSenderIdentification.]
>
> On Thu, 11 Nov 2021 at 12:45, David Fernandez  wrote:
>> On 11/11/2021 11:23, Peter Maydell wrote:
>>>The problem is that the
>>> udev machinery that creates nodes in /dev is being too slow
>>> (or possibly is failing for some other reason, but given all the
>>> other timeouts I'm guessing "everything is too slow") and so
>>> the systemd unit that is waiting for /dev/ttyS0 to be created
>>> times out.
>> What is a bit puzzling is that this is supposed to all run in an
>> emulated machine having its own simulated time, so yes things are slow,
>> but everything should happen as expected, just slowly.
> That's not the way QEMU's emulation of time works. In non-icount
> mode, which is the default, wall clock time in the VM follows
> wall clock time in the outside world. The guest just sees
> itself as running on a rather slow CPU (and one with some odd
> performance characteristics about what is slow compared to
> what other things).
I see... I wonder if a redhat system is likely to run better in such a
situation, or if any other debian/ubuntu distro is known to run better when
host performance might have an impact.

>> I guess I will compile from sources on Fedora and see if I get the same
>> problem, as it is a bit hard to believe that the only way to run qemu is
>> to have a high end machine dedicated just to run an install cd.
> There's probably something odd going on, it's just not clear
> what and trying to diagnose it is going to be really hard.
> It is the case that if the host system is underpowered then
> it's not going to be able to run complicated guests in
> an acceptably performant way, but that ought to apply more
> to situations like "I want to emulate a Windows guest on my
> first-generation raspberry pi".
>
> What does 'file' say about the QEMU binary you're running
> on the aarch64 system? (This is a check to eliminate an
> almost-certainly-not-the-problem theory.)
sen@vpm-devkit:~$ which qemu-system-x86_64
/usr/local/bin/qemu-system-x86_64
sen@vpm-devkit:~$ file /usr/local/bin/qemu-system-x86_64
/usr/local/bin/qemu-system-x86_64: ELF 64-bit LSB shared object, ARM 
aarch64, version 1 (GNU/Linux), dynamically linked, interpreter 
/lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, 
BuildID[sha1]=e05b26921bd35d96b6c749d23d5bfa5e6e43ab4c, stripped

> -- PMM




Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-12 Thread David Fernandez
On 12/11/2021 10:27, Thomas Huth wrote:
> [No suele recibir correo electrónico de th...@redhat.com. Obtenga 
> información acerca de por qué esto es importante en 
> http://aka.ms/LearnAboutSenderIdentification.]
>
> On 11/11/2021 17.10, Peter Maydell wrote:
>> On Mon, 8 Nov 2021 at 18:05, David Fernandez 
>>  wrote:
>>>
>>> ../configure \
>>
>>>     --enable-lto \
>>
>> Does disabling LTO make a difference? That's about the only
>> thing in the configure options that stands out as maybe
>> making a difference.
>
> Please don't use LTO on non-x86 hosts, there are some known problems with
> these optimizations (though the symptoms look rather differently to 
> what has
> been described here).
>
>  Thomas
>
Sure, as mentioned in my previous response, it was only available for 
the branched versions, but not available anymore for v5.2.0 and v6.1.0 
or top of master, which are the latest ones I have been testing.

I have also seen problems with LTO when compiling for arm, so I tend to stay 
away from it, but I did try it as I assumed people use it in general.

Regards



Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-12 Thread David Fernandez
On 11/11/2021 16:10, Peter Maydell wrote:
> On Mon, 8 Nov 2021 at 18:05, David Fernandez  wrote:
>> ../configure \
>> --enable-lto \
> Does disabling LTO make a difference? That's about the only
> thing in the configure options that stands out as maybe
> making a difference.
>
> -- PMM

Hi Peter,

That option is not available anymore in the tagged versions... it was 
only available in the stable-2.12.

I believe I mentioned that in my previous emails... the last summarizing 
email does not have it anymore, as I updated things according to my 
testing with v5.2.0 and v6.1.0.

It does not seem to make any difference.

Regards



Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.

2021-11-12 Thread David Fernandez
On 11/11/2021 16:40, Peter Maydell wrote:
> On Mon, 8 Nov 2021 at 18:05, David Fernandez  wrote:
>> I couldn't get the stock qemu-system-x86_04 to boot correctly, as it was
>> an old version 2.11.1, I decided to recompile from sources to see if
>> that would fix the problem, but the problem still persists, using both
>> top of master and stable-2.12 (currently on that).[ TIME ] Timed out
>> waiting for device dev-ttyS0.device.
> FWIW, I tried to repro this on an aarch64 server I have access to,
> and I don't see that error -- the system boots (eventually) to
> the installer UI.
>
> QEMU version built: upstream git, commit 70f872ca916ac45.
>
> Configure options:
> --target-list=x86_64-softmmu --disable-tools --disable-docs
>
> vmlinuz and initrd extracted from the iso with
> isoinfo  -i ubuntu-18.04.6-live-server-amd64.iso -x '/CASPER/INITRD.;1' > 
> initrd
> isoinfo  -i ubuntu-18.04.6-live-server-amd64.iso -x
> '/CASPER/VMLINUZ.;1' > vmlinuz
>
> QEMU command:
> ./build/tgt-x86/qemu-system-x86_64 -boot order=dc,menu=on -cdrom
> ubuntu-18.04.6-live-server-amd64.iso -nographic -serial mon:stdio -m
> 16384 -kernel /tmp/vmlinuz -initrd /tmp/initrd -append 'boot=casper
> console=ttyS0 ---'
>
> (I didn't bother creating any of the devices for this test, so as
> you note there's a bunch of harmless complaints from the kernel
> floppy driver.)
>
> Total time to reach first screen of the text installer: 5m26s
>
> It's maybe also worth mentioning that "emulate x86-64" is, although
> a supported use case, not one that as far as I'm aware anybody is
> paid to work on, so enhancements and bugfixes to it are largely
> done on a volunteer basis. (This is as distinct from, for example,
> "using KVM on either x86-64 or aarch64" and "emulation of aarch64
> on x86-64", which have more people working on them.)
>
> -- PMM

For whatever it is worth, I compiled v6.1.0 on my Fedora Intel laptop and 
the issue is not there either, so this seems closely related to aarch64 
and most likely to Ubuntu 18.04.5 as a host... and I am more and more 
afraid that it might have to do with the kernel configuration for the 
Jetson... I can recompile it if needed, so if there is some test that 
needs to be done to find out, I am happy to do it.




Re: How to get PID in tcg plugin

2023-03-30 Thread David Smitley
Is there an estimated time frame for when register access will be 
available from a plugin? Is there a branch with this feature that 
someone could try out?


Cheers,
David

On 3/30/23 05:03, Alex Bennée wrote:

syheliel syheliel  writes:


I'd like to use a TCG plugin to trace a particular process using qemu system 
mode, but I have no idea how to do it.

In system mode QEMU doesn't have any visibility of what process id code
might be executing in the guest. If we had register access support you
could use something like the CONTEXTIDR but that is still WIP.






Re: [Qemu-discuss] [Qemu-devel] [dpdk-dev] Will huge page have negative effect on guest vm in qemu enviroment?

2017-06-21 Thread Dr. David Alan Gilbert
* Sam (batmanu...@gmail.com) wrote:
> Thank you~
> 
> 1. We ran a comparison test of a qemu-kvm environment with and without huge
> pages. The qemu start process is much longer in the huge page environment. I
> wrote an email titled '[DPDK-memory] how qemu waste such long time under dpdk
> huge page envriment?'. I could resend it later.

> 2. Then I ran another test on a qemu-kvm environment with and without huge
> pages, in which I did not start ovs-dpdk or a vhostuser port in the qemu
> start process. And I found that the qemu start process is also much longer in
> the huge page environment.
> 
> So I think the huge page environment, whose grub2.cfg file is specified in
> '[DPDK-memory] how qemu waste such long time under dpdk huge page
> envriment?', really does have a negative effect on the qemu start-up process.
> 
> That's why we don't like to use ovs-dpdk. Although ovs-dpdk is faster, the
> start-up process of qemu is much longer than with normal ovs, and the reason
> has nothing to do with ovs but with huge pages. For customers, VM start-up
> time is more important than network speed.

How are you setting up hugepages?  What values are you putting in the
various /proc or cmdline options and how are you specifying them on
QEMU's commandline.

I think one problem is that with hugepages qemu normally allocates them
all at the start;  I think there are cases where that means moving a lot
of memory about, especially if you lock it to particular NUMA nodes.
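
For comparison, a fairly typical explicit-hugepage setup looks roughly like
this (counts, sizes and paths are only illustrative):

  # On the host: reserve 2MB hugepages and make sure hugetlbfs is mounted
  echo 2048 > /proc/sys/vm/nr_hugepages
  mount -t hugetlbfs none /dev/hugepages

  # Back the guest RAM with them and preallocate it up front
  qemu-system-x86_64 -m 4G \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on,prealloc=on \
    -numa node,memdev=mem0

plus whatever disk/network options are in use; prealloc=on makes the up-front
allocation described above explicit.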

> BTW, ovs-dpdk start up process is also longer then normal ovs. But I know
> the reason, it's dpdk EAL init process with forking big continous memory
> and zero this memory. For qemu, I don't know why, as there is no log to
> report this.

I suspect it's the mmap'ing and madvise'ing of those hugepages - you should
be able to see it with an strace of a qemu startup, or perhaps a
'perf top' on the host while it's in that pause.
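
Something along these lines (commands illustrative) usually shows where
that time goes:

  # Log the memory-mapping syscalls made during startup
  strace -f -tt -e trace=mmap,madvise,munmap -o qemu-start.strace \
      qemu-system-x86_64 -m 4G -display none

  # Or watch the host while QEMU sits in that pause
  perf top -p "$(pgrep -o -f qemu-system-x86_64)"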

I'm told that hugepages are supposed to be especially useful with IOMMU
performance for cards passed through to the guest, so it might still
be worth doing.

Dave

> 2017-06-21 14:15 GMT+08:00 Pavel Shirshov :
> 
> > Hi Sam,
> >
> > Below I'm saying about KVM. I don't have experience with vbox and others.
> > 1. I'd suggest not using dpdk inside the VM if you want to see the best
> > performance on the box.
> > 2. Huge pages enabled globally will not have any bad effect on the guest
> > OS. Except you have to enable huge pages inside the VM and provide real
> > huge pages for the VM's huge pages from the host system. Otherwise dpdk
> > will use "hugepages" inside the VM, but these "huge pages" will not be
> > real ones. They will be constructed from normal pages outside. Also, when
> > you enable huge pages the OS will reserve them from the start and your OS
> > will not be able to use them for other things. Also you can't swap out huge
> > pages, KSM will not work for them, and so on.
> > 3. You can enable huge pages just for one numa node. It's impossible
> > to enable them just for one core. Usually you reserve some memory for
> > hugepages when the system starts and you can't use this memory in
> > normal applications unless the application knows how to use them.
> >
> >
> > On Tue, Jun 20, 2017 at 8:35 PM, Sam  wrote:
> > > BTW, we also thought about using ovs-dpdk in a docker environment, but the
> > > test results said it's not a good idea; we don't know why.
> > >
> > > 2017-06-21 11:32 GMT+08:00 Sam :
> > >
> > >> Hi all,
> > >>
> > >> We plan to use DPDK on an HP host machine with several cores and a lot of
> > >> memory. We plan to use a qemu-kvm environment. The host will carry 4 or
> > >> more guest VMs and 1 ovs.
> > >>
> > >> Ovs-dpdk is much faster than normal ovs, but to use ovs-dpdk, we have to
> > >> enable huge pages globally.
> > >>
> > >> My question is, will huge pages enabled globally have a negative effect on
> > >> guest VM memory operations or something? If so, how do we prevent this,
> > >> or could I enable huge pages on just some cores, or enable huge pages for
> > >> only a part of memory?
> > >>
> > >> Thank you~
> > >>
> >
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-discuss] [edk2] How to handle pflash backed OVMF FW upgrade and live migration best?

2018-03-02 Thread Dr. David Alan Gilbert
* Laszlo Ersek (ler...@redhat.com) wrote:
> CC Dave
> 
> On 03/01/18 12:21, Thomas Lamprecht wrote:
> > Hi,
> > 
> > I'm currently evaluating how to update the firmware (OVMF) code image
> > without impacting a KVM/QEMU VM on live migration. I.e., the FW code lives
> > under /usr/share/OVMF/OVMF_CODE.fd and gets passed to the QEMU command with:
> > 
> > qemu-binary [...] -drive 
> > "if=pflash,unit=0,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd"
> > 
> > Now if the target node has an updated version of OVMF the VM does not really
> > like that, as from its POV it effectively gets another code image loaded
> > from one moment to the other without any notice.
> 
> This should not cause any issues. On the destination host, the
> destination QEMU instance should load the (different) OVMF_CODE.fd image
> into the pflash chip, at startup. However, the incoming migration stream
> contains, as a RAMBlock, the original OVMF_CODE.fd image. In other
> words, the original firmware image is migrated (in memory, as part of
> the migration stream) too.

Yep.

> (
> 
> BTW, there is very little firmware code in OVMF that actually *executes*
> from pflash -- that's just the SEC module. SEC decompresses the PEI and
> DXE firmware volumes from pflash to RAM, and the rest of the firmware
> runs from normal RAM. This applies to runtime firmware services as well.
> So about the only times when OVMF_CODE.fd (in the pflash chip) and
> migration intersect are:
> - if you migrate while SEC is running from pflash (i.e. the earliest
> part of the boot),
> - if you warm-reboot on the destination host after migration -- in this
> case, the OVMF_CODE.fd binary (that got migrated in the pflash RAMBlock
> from the source host) will again boot from pflash.
> 
> )
> 
> > So my questions is if it would make sense to see this read-only pflash
> > content as "VM state" and send it over during live migration?
> 
> That's what already happens.
> 
> Now, if you have a differently *sized* OVMF_CODE.fd image on the
> destination host, that could be a problem. Avoiding such problems is an
> IT / distro job.

Yeh; padding the binaries to a power-of-2 with a bit of space left is
a good rule of thumb.

> There are some "build aspects" of OVMF that can make two OVMF binaries
> "incompatible" in this sense. Using *some* different build flags it's
> also possible to make (a) an OVMF binary and (b) a varstore file
> originally created for another OVMF binary, incompatible.

Fun.

> > This would
> > make migration way easier. Else we need to save all FW files and track which
> > one the VM is using, so that when starting the migration target VM we pass
> > along the correct pflash drive file. Sending over a pflash drive could maybe
> > only get done when a special flag is set for the pflash drive?
> > 
> > As said I can work around in our management stack, but saving the FW image
> > and tracking which VM uses what version, and that cluster wide, may get
> > quite a headache and we would need to keep all older OVMF binaries around...
> 
> When you deploy new OVMF binaries (packages) to a subset of your
> virtualization hosts, you are responsible for keeping those compatible.
> (They *can* contain code updates, but those updates have to be
> compatible.) If a new OVMF binary is built that is known to be
> incompatible, then it has to be installed under a different pathname
> (either via a separate package; or in the same package, but still under
> a different pathname).
> 
> To give you the simplest example, binaries (and varstores) corresponding
> to FD_SIZE_2MB and FD_SIZE_4MB are incompatible. If a domain is
> originally defined on top of an FD_SIZE_2MB OVMF, then it likely cannot
> be migrated to a host where the same OVMF pathname refers to an
> FD_SIZE_4MB binary. If you have a mixed environment, then you need to
> carry both binaries to all hosts (and if you backport fixes from
> upstream edk2, you need to backport those to both binaries).
> 
> In addition, assuming the domain is powered down for good (the QEMU
> process terminates), and you update the domain XML from the FD_SIZE_2MB
> OVMF binary to the FD_SIZE_4MB binary, you *also* have to
> delete/recreate the domain's variable store file (losing all UEFI
> variables the domain has accumulated until then). This is because the
> FD_SIZE_4MB binary is incompatible with the varstore that was originally
> created for the FD_SIZE_2MB binary (and vice versa).

I assume that gives a clear and obvious error message - right?

Dave

> Thanks
> Laszlo
> 
> > If I'm missing something and there's already an easy way for this I'd be
> > very happy to hear from it.
> > 
> > Besides qemu-discuss I posted it to edk2-devel as there maybe more people
> > are in the QEMU and OVMF user intersection. :)
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-discuss] [Qemu-devel] virtio-console downgrade the virtio-pci-blk performance

2018-10-01 Thread Dr. David Alan Gilbert
* Feng Li (lifeng1...@gmail.com) wrote:
> Hi,
> I found an obvious performance downgrade when virtio-console is combined
> with virtio-pci-blk.
> 
> This phenomenon exists in nearly all Qemu versions and all Linux
> (CentOS7, Fedora 28, Ubuntu 18.04) distros.
> 
> This is a disk cmd:
> -drive 
> file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> 
> If I add "-device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5  ", the virtio
> disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
> 
> In VM, if I rmmod virtio-console, the performance will back to normal.
> 
> Any idea about this issue?
> 
> I don't know this is a qemu issue or kernel issue.

It sounds odd;  can you provide more details on:
  a) The benchmark you're using.
  b) the host and the guest config (number of cpus etc)
  c) Why are you running it with iscsi back to the same host - why not
 just simplify the test back to a simple file?
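
On (a), for example, a self-contained 4k random-read job along these lines
(purely illustrative; adjust /dev/vdb to the disk under test) would make the
numbers easy to reproduce and compare:

  fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based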

Dave

> 
> Thanks in advance.
> -- 
> Thanks and Best Regards,
> Alex
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-discuss] [Qemu-devel] Cross-posted : Odd QXL/KVM performance issue with a Windows 7 Guest

2019-09-06 Thread Dr. David Alan Gilbert
 clflush size  : 64
> cache_alignment   : 64
> address sizes : 43 bits physical, 48 bits virtual
> power management: ts ttp tm hwpstate eff_freq_ro [13] [14]
> 
> qemu configured with : 
> PKG_CONFIG_PATH=/usr/local/libvirt/lib/pkgconfig:/usr/local/libvirt/share/pkgconfig/
> ./configure --target-list=x86_64-softmmu --disable-gtk && make -j6
> test:~# uname -a
> 
> Linux test 5.2.9 #42 SMP Tue Aug 20 16:41:13 AWST 2019 x86_64 GNU/Linux
> 
> test:~# zgrep KVM /proc/config.gz
> CONFIG_HAVE_KVM=y
> CONFIG_HAVE_KVM_IRQCHIP=y
> CONFIG_HAVE_KVM_IRQFD=y
> CONFIG_HAVE_KVM_IRQ_ROUTING=y
> CONFIG_HAVE_KVM_EVENTFD=y
> CONFIG_KVM_MMIO=y
> CONFIG_KVM_ASYNC_PF=y
> CONFIG_HAVE_KVM_MSI=y
> CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
> CONFIG_KVM_VFIO=y
> CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
> CONFIG_KVM_COMPAT=y
> CONFIG_HAVE_KVM_IRQ_BYPASS=y
> CONFIG_KVM=m
> # CONFIG_KVM_INTEL is not set
> CONFIG_KVM_AMD=m
> # CONFIG_KVM_MMU_AUDIT is not set
> 
> test:~# qemu --version
> QEMU emulator version 4.1.50 (v4.1.0-714-g90b1e3a)
> Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
> 
> To be clear, I'm not chasing benchmarks. This is a real issue with trying to
> run CAD on a Windows 7 VM on a Ryzen machine.
> 
> I've tried it on two machines (a 1500x and an 1800x) with the same result.
> The same configuration on the i7 box works perfectly.
> 
> I've tried several versions of SPICE, multiple versions of KVM, kernels back
> as far as 4.0 with KVM enabled, different old and new windows VMs and driver
> versions. The only constant is it is limited on the AMD box if KVM is
> enabled.
> 
> I was leaning towards it being something in the qemu/spice software stack,
> but the fact that without kvm enabled it goes as fast as the CPU will allow
> it would indicate perhaps the fault likes in some relationship with kvm, so
> I'm cross-posting to qemu-devel to see if this jogs someones memory, or if
> there are any other things I can try to attempt to solve this one.
> 
> Regards,
> Brad
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-discuss] [Qemu-devel] Cross-posted : Odd QXL/KVM performance issue with a Windows 7 Guest

2019-09-09 Thread Dr. David Alan Gilbert
* Brad Campbell (lists2...@fnarfbargle.com) wrote:
> 
> On 7/9/19 03:03, Dr. David Alan Gilbert wrote:
> > * Brad Campbell (lists2...@fnarfbargle.com) wrote:
> > > On 2/9/19 6:23 pm, Brad Campbell wrote:
> > > 
> > > > Here is the holdup :
> > > > 
> > > > 11725@1567416625.003504:qxl_ring_command_check 0 native
> > > > 11725@1567416625.102653:qxl_io_write 0 native addr=0 (QXL_IO_NOTIFY_CMD)
> > > > val=0 size=1 async=0
> > > > 
> > > > ~100ms delay prior to each logged QXL_IO_NOTIFY_CMD on the AMD box which
> > > > explains the performance difference. Now I just need to figure out if
> > > > that lies in the guest, the guest QXL driver, QEMU or SPICE and why it
> > > > exhibits on the AMD box and not the i7.
> > > > 
> > > > To get to this point, I recompiled the kernel on the i7 box with both
> > > > AMD and Intel KVM modules. Once that was running I cloned the drive and
> > > > put it in the AMD box, so the OS, software stack and all dependencies
> > > > are identical.
> > > Recap:
> > > 
> > > I have a machine with a Windows 7 VM which is running on an i7-3770. This
> > > works perfectly.
> > > 
> > > Clone the disk and put it in a new(ish) AMD Ryzen 1500x machine and the
> > > display output using qxl/spice is now limited to ~5-7fps.
> > > 
> > > I originally cloned the entire machine to keep the software versions
> > > identical.
> > > 
> > > To simplify debugging and reproduction I'm now using :
> > > - An identical SPICE version to that on the i7.
> > > - A fresh 64 bit Windows 7 VM.
> > > - The D2D benchmark from Crystalmark 2004R7.
> > > 
> > > The machine is booted with :
> > > 
> > > qemu -enable-kvm \
> > >   -m 8192\
> > >   -rtc base=localtime\
> > >   -vga qxl\
> > >   -device qxl\
> > >   -global qxl-vga.guestdebug=3\
> > >   -global qxl-vga.cmdlog=1\
> > >   -global qxl-vga.vram_size=65536\
> > >   -global qxl.vram_size=65536\
> > >   -global qxl-vga.ram_size=65536\
> > >   -global qxl.ram_size=65536\
> > >   -net nic,model=virtio\
> > >   -net tap,ifname=tap0,script=/etc/qemu-ifup,vhost=on\
> > >   -usbdevice tablet\
> > >   -spice port=5930,disable-ticketing\
> > >   -device virtio-serial\
> > >   -chardev spicevmc,id=vdagent,name=vdagent\
> > >   -device virtserialport,chardev=vdagent,name=com.redhat.spice.0\
> > >   -smp 3,maxcpus=3,cores=3,threads=1,sockets=1\
> > >   -cpu qemu64,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
> > -cpu qemu64 is almost always a bad idea;  does -cpu host help ?
> > 
> > Dave
> 
> 
> No. I was using -cpu host. I changed it to qemu64 for testing so I could add
> and remove -enable-kvm without the machine swapping drivers around.

Oh, hmm.
Sorry, I don't know too much about where to look then; it could be any of:
  a) Windows
  b) guest graphics drivers
  c) spice server in qemu
 
and probably some more.

So I think it's going to be a case of profiling on the two different
systems and seeing if you can spot anything in particular that stands out.

Dave

> Regards,
> 
> Brad
> 
> -- 
> An expert is a person who has found out by his own painful
> experience all the mistakes that one can make in a very
> narrow field. - Niels Bohr
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: Live migration fails with "Mismatched RAM page size ram-node0 (local) 2097152 != 1526773257204281392"

2021-02-02 Thread Dr. David Alan Gilbert
* Damir Chanyshev (conflict...@gmail.com) wrote:
> Hello,
> Qemu version 5.1 host os Debian 10.7
> Two exactly the same  machines ( except ram size 380G and 1.5T )
> Live migration fails (from host with 380G ram to 1.5T) with errors like this:
> Feb 02 16:26:13 QEMU[12090]: kvm: load of migration failed: Invalid argument
> Feb 02 16:26:13 QEMU[12090]: kvm: error while loading state for
> instance 0x0 of device 'ram'
> Feb 02 16:26:13 QEMU[12090]: kvm: Mismatched RAM page size ram-node0
> (local) 2097152 != 1526773257204281392
> 
> I think it's some overflow issue.

That's a fun error; I've not seen anyone manage to trigger that before.

Could you please post the qemu command line from both the source and the
destination?

My guess here is that the use of huge pages is different on the source
and destination;  when the destination is using huge pages it will read
the page size of the block from the stream and compare it to the page
size it's using - they should match (if postcopy is enabled).

To me it looks like the destination is using 2MB huge pages
(probably explicitly from something like /dev/hugepages)
and maybe the source isn't; the source (because it's not using
hugepages) didn't bother sending the page size, so the destination
then reads some junk off the stream; that junk is probably the name
of the next RAMBlock, and it's probably a PCI device, so that
huge number is hex 15303030303A3030 - a length byte of 21 followed by
the ASCII characters:
  0000:00

which looks like the start of a PCI address; maybe for video RAM.
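
You can sanity-check that decoding with something like (illustrative):

  $ echo 15303030303A3030 | xxd -r -p | od -c
  0000000 025   0   0   0   0   :   0   0
  0000010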

Or, as a simple answer: if you've got postcopy enabled and you're
using hugepages, make sure you use them consistently on source
and destination.
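
A minimal sketch of backing guest RAM with explicit huge pages on both sides
(the memory backend id, the 4G size and the /dev/hugepages path are only
examples; use the same arrangement on source and destination, and adjust to
your setup):

  qemu-system-x86_64 -enable-kvm -m 4096 \
    -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages \
    -numa node,memdev=mem0 \
    ...

  # A quick way to compare huge page configuration on the two hosts:
  grep Huge /proc/meminfo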

Dave

> -- 
> Thanks,
> Damir Chanyshev
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: romfile resize

2021-02-23 Thread Dr. David Alan Gilbert
* Philippe Mathieu-Daudé (phi...@redhat.com) wrote:
> Cc'ing qemu-devel@
> 
> On 2/23/21 1:45 AM, Jiatong Shen wrote:
> > Hello,
> > 
> >   we are faced with an issue where a romfile whose size has changed
> > (efi-virtio.rom) makes live migration fail. Does qemu load this rom from
> > its current host only? If yes, why can't it be synced from the source vm?

Hi,
  For migration to work the ROM has to be the same size on the source
and destination.

  The problem is that when the destination starts up it allocates the
size of the ROM based on the size of the file; but then the migration
comes along and tries to copy the data from the source machine into that
allocation, and isn't sure what should happen when it doesn't quite fit.

  There is some variation allowed (I think the allocated size gets
rounded up, maybe to the next power of 2); but you still hit problems
when the ROM size crosses certain thresholds.

  In the latest qemu, a 'romsize' property was added (see git commit
08b1df8ff463e72b0875538fb991d5393047606c ); that lets you specify a
size that's big enough to hold some space for future expansion - e.g.
let's say your ROM is currently 300k, you might specify romsize=512k
and then it doesn't matter what size the actual file is, we'll always
allocate 512k, and as long as the file is less than 512k migration will
work.
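
A minimal sketch of what that looks like on the command line (the
virtio-net-pci device and the net0 netdev name are only examples; romsize is
given in bytes and, as far as I can tell from that commit, must be a power of
two at least as large as the ROM file):

  -device virtio-net-pci,netdev=net0,romfile=efi-virtio.rom,romsize=524288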

  The more manual way to do that is to arrange for your files to be
padded to a larger boundary so that you leave room for growth.
Some distros have done that for a while.
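
A minimal sketch of that padding, assuming GNU coreutils and keeping a copy of
the original file; 512 KiB here is only an example target size:

  cp efi-virtio.rom efi-virtio.rom.orig
  truncate -s 512K efi-virtio.rom    # pads the file with zeros up to 512 KiB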

Dave
  
> > thank you.
> > 
> > -- 
> > 
> > Best Regards,
> > 
> > Jiatong Shen
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: romfile resize

2021-02-23 Thread Dr. David Alan Gilbert
* Jiatong Shen (yshxxsjt...@gmail.com) wrote:
> Hi,
> 
>   Thank you very much for the answer. So if the romfile on the destination
> is larger than the one on the source, why does the romfile check still not
> pass? The destination has enough space to hold the romfile.

Right.

Dave

> thank you.
> 
> Jiatong Shen
> 
> On Tue, Feb 23, 2021 at 5:46 PM Dr. David Alan Gilbert 
> wrote:
> 
> > * Philippe Mathieu-Daudé (phi...@redhat.com) wrote:
> > > Cc'ing qemu-devel@
> > >
> > > On 2/23/21 1:45 AM, Jiatong Shen wrote:
> > > > Hello,
> > > >
> > > >   we are faced with an issue where a romfile whose size has changed
> > > > (efi-virtio.rom) makes live migration fail. Does qemu load this rom from
> > > > its current host only? If yes, why can't it be synced from the source vm?
> >
> > Hi,
> >   For migration to work the ROM has to be the same size on the source
> > and destination.
> >
> >   The problem is that when the destination starts up it allocates the
> > size of the ROM based on the size of the file; but then the migration
> > comes along and tries to copy the data from the source machine into that
> > allocation, and isn't sure what should happen when it doesn't quite fit.
> >
> >   There is some variation allowed (I think the allocated size gets
> > rounded up, maybe to the next power of 2); but you still hit problems
> > when the ROM size crosses certain thresholds.
> >
> >   In the latest qemu, a 'romsize' property was added (see git commit
> > 08b1df8ff463e72b0875538fb991d5393047606c ); that lets you specify a
> > size that's big enough to hold some space for future expansion - e.g.
> > let's say your ROM is currently 300k, you might specify romsize=512k
> > and then it doesn't matter what size the actual file is, we'll always
> > allocate 512k, and as long as the file is less than 512k migration will
> > work.
> >
> >   The more manual way to do that, is to arrange for your files to be
> > padded to a larger boundary so that you leave room for growth.
> > Some distros have done that for a while.
> >
> > Dave
> >
> > > > thank you.
> > > >
> > > > --
> > > >
> > > > Best Regards,
> > > >
> > > > Jiatong Shen
> > >
> > --
> > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> >
> >
> 
> -- 
> 
> Best Regards,
> 
> Jiatong Shen
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK