cdrom scsi passthrough not working well

2020-10-27 Thread daggs
Greetings,

I have a vm running under qemu 5.1.0 with a cdrom scsi passthrough into it.
I can eject the tray in and out but when I insert a disc, it isn't detected 
and the dmesg on the guest is filled with these prints:
[384216.443262] sr 0:0:0:0: ioctl_internal_command return code = 802
[384216.443268] sr 0:0:0:0: Sense Key : 0xb [current]
[384216.443272] sr 0:0:0:0: ASC=0x0 ASCQ=0x6
[384218.504142] sr 0:0:0:0: ioctl_internal_command return code = 802
[384218.504150] sr 0:0:0:0: Sense Key : 0xb [current]
[384218.504153] sr 0:0:0:0: ASC=0x0 ASCQ=0x6
[384220.561302] sr 0:0:0:0: ioctl_internal_command return code = 802
[384220.561308] sr 0:0:0:0: Sense Key : 0xb [current]
[384220.561312] sr 0:0:0:0: ASC=0x0 ASCQ=0x6

the vm is uefi q35 based, generated by libvirt 6.8.0, the cdrom part is this:
-blockdev 
{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-2-backend","read-only":true}
-device 
scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-2-backend,id=hostdev0
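
(for reference, the non-passthrough variant would look something like this, 
with /dev/sr0 and the node name as placeholders:
-blockdev {"driver":"host_cdrom","filename":"/dev/sr0","node-name":"cdrom-backend","read-only":true}
-device scsi-cd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=cdrom-backend,id=cdrom0
host_cdrom lets qemu handle tray and media-change events itself instead of 
passing raw sg commands through.)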

is there something bad with the config or have I encountered a bug?

Thanks,

Dagg.



image works in native but not in vm when -cpu is set to host

2020-11-27 Thread daggs
Greetings.

I have an image I've created with a bunch of custom CFLAGS which works on my 
machine when it comes to native boot.
if I take that same image into a vm managed via libvirt, I get a kernel panic.
I'd assume that something is missing from my vm config, the question is what 
and what can I do about it?
here is the flags part of lscpu in native and vm: https://dpaste.com/3TR8QJ5G8
the qemu cmd is: /usr/bin/qemu-system-x86_64 -name 
guest=streamer-vm-q35,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-streamer-vm-q35/master-key.aes
 -blockdev 
{"driver":"file","filename":"/usr/share/qemu/edk2-x86_64-secure-code.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}
 -blockdev 
{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}
 -blockdev 
{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/streamer-vm-q35_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}
 -blockdev 
{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}
 -machine 
pc-q35-5.0,accel=kvm,usb=off,smm=on,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format
 -cpu host,migratable=on -m 7168 -overcommit mem-lock=off -smp 
2,sockets=1,dies=1,cores=1,threads=2 -uuid 4fb1463b-837c-40fc-a760-a69afc040a1a 
-display none -no-user-config -nodefaults -chardev 
socket,id=charmonitor,fd=27,server,nowait -mon 
chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global 
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global 
ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device 
i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device 
pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 -device 
pcie-root-port,port=0x8,chassis=3,id=pci.3,bus=pcie.0,multifunction=on,addr=0x1 
-device pcie-root-port,port=0x9,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x1 
-device pcie-root-port,port=0xa,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x2 
-device pcie-root-port,port=0xb,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x3 
-device pcie-root-port,port=0xc,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x4 
-device qemu-xhci,id=usb,bus=pci.4,addr=0x0 -device 
virtio-scsi-pci,id=scsi0,bus=pci.2,addr=0x1 -blockdev 
{"driver":"file","filename":"/home/streamer/streamer.img.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}
 -blockdev 
{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}
 -device 
virtio-blk-pci,bus=pci.5,addr=0x0,drive=libvirt-1-format,id=virtio-disk0,bootindex=1
 -netdev tap,fd=30,id=hostnet0 -device 
e1000e,netdev=hostnet0,id=net0,mac=52:54:00:5a:4c:8c,bus=pci.3,addr=0x0 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-blockdev 
{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-2-backend","read-only":true}
 -device 
scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-2-backend,id=hostdev0
 -device 
vfio-pci,host=0000:00:02.0,id=hostdev1,bus=pci.7,addr=0x0,romfile=/home/streamer/gpu-8086:5902-uefi.rom
 -device vfio-pci,host=0000:00:1f.3,id=hostdev2,bus=pci.2,addr=0x2 -device 
usb-host,hostbus=1,hostaddr=3,id=hostdev3,bus=usb.0,port=1 -device 
usb-host,id=hostdev4,bus=usb.0,port=2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.6,addr=0x0 -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
timestamp=on

if I build the image with the default flags (march=x86-64), the vm boots well.
my new CFLAGS are -O2 -pipe -march=skylake -mabm -mno-adx -mno-avx -mno-avx2 
-mno-bmi -mno-bmi2 -mno-f16c -mno-fma -mno-xsave -mno-xsavec -mno-xsaveopt 
-mno-xsaves -mno-sgx
the cpu is Intel(R) Pentium(R) CPU G4560 @ 3.50GHz
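
to see exactly which flags the guest loses relative to native, something like 
this should work (the file names are mine):
# on the native boot:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > native.txt
# inside the vm:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > vm.txt
# flags present natively but missing in the vm:
comm -23 native.txt vm.txt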

thoughts?

Thanks,

Dagg.




Re: image works in native but not in vm when -cpu is set to host

2020-11-27 Thread daggs
Greetings Nerijus,

> Sent: Friday, November 27, 2020 at 11:34 AM
> From: "Nerijus Baliunas via" 
> To: qemu-discuss@nongnu.org
> Subject: Re: image works in native but not in vm when -cpu is set to host
>
> Please provide them by text here, the link does not work. Regards, Nerijus
here:
vm:
Architecture: x86_64
CPU op-mode(s):  32-bit, 64-bit
Byte Order:  Little Endian
Address sizes:   40 bits physical, 48 bits virtual
CPU(s):  2
On-line CPU(s) list: 0,1
Thread(s) per core:  2
Core(s) per socket:  1
Socket(s):   1
NUMA node(s): 1
Vendor ID:   GenuineIntel
CPU family:  6
Model:   158
Model name:  Intel(R) Pentium(R) CPU G4560 @ 3.50GHz
Stepping: 9
CPU MHz: 3503.998
BogoMIPS: 7010.99
Virtualization:  VT-x
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:   32K
L1i cache:   32K
L2 cache:4096K
L3 cache:16384K
NUMA node0 CPU(s):   0,1
Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm 
constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni 
pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave rdrand hypervisor lahf_lm abm 3dnowprefetch 
cpuid_fault invpcid_single pti ibrs ibpb stibp tpr_shadow vnmi flexpriority ept 
vpid ept_ad fsgsbase tsc_adjust smep erms invpcid mpx rdseed smap clflushopt 
xsaveopt xsavec xgetbv1 xsaves arat umip arch_capabilities
native:
Architecture: x86_64
CPU op-mode(s):  32-bit, 64-bit
Byte Order:  Little Endian
Address sizes:   39 bits physical, 48 bits virtual
CPU(s):  4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Core(s) per socket:  2
Socket(s):   1
NUMA node(s): 1
Vendor ID:   GenuineIntel
CPU family:  6
Model:   158
Model name:  Intel(R) Pentium(R) CPU G4560 @ 3.50GHz
Stepping: 9
CPU MHz: 1869.666
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Virtualization:  VT-x
L1d cache:   64 KiB
L1i cache:   64 KiB
L2 cache:512 KiB
L3 cache:3 MiB
NUMA node0 CPU(s):   0-3
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf:  Mitigation; PTE Inversion; VMX conditional 
cache flushes, SMT vulnerable
Vulnerability Mds:   Vulnerable: Clear CPU buffers attempted, no 
microcode; SMT vulnerable
Vulnerability Meltdown:  Mitigation; PTI
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:Mitigation; usercopy/swapgs barriers and 
__user pointer sanitization
Vulnerability Spectre v2:Mitigation; Full generic retpoline, IBPB 
conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Vulnerable: No microcode
Vulnerability Tsx async abort:   Not affected
Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep 
mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe 
syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good 
nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl 
vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb 
invpcid_single pti ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad 
fsgsbase tsc_adjust smep erms invpcid mpx rdseed smap clflushopt intel_pt 
xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify 
hwp_act_window hwp_epp

thanks for the help.

Dagg.



Re: image works in native but not in vm when -cpu is set to host

2020-11-28 Thread daggs
Greetings,

> Sent: Saturday, November 28, 2020 at 2:32 AM
> From: "Ken Moffat via" 
> To: qemu-discuss@nongnu.org
> Subject: Re: image works in native but not in vm when -cpu is set to host
>
> On Sat, Nov 28, 2020 at 02:09:41AM +0200, Nerijus Baliunas via wrote:
> > On Fri, 27 Nov 2020 23:17:52 + Ken Moffat via  
> > wrote:
> >
> > > > > Please provide them by text here, the link does not work. Regards, 
> > > > > Nerijus
> > >
> > > You didn't ask the other question (details of the panic: might be
> > > missing driver, might be invalid opcode as far as qemu is concerned,
> > > might be something else entirely.
> >
> > I did. I don't know why OP did not provide them yet.
> >
> > Regards,
> > Nerijus
>
> Sorry, I can't type.  What I meant to write was "You didn't *answer*
> the other question." (and 'You' was for the OP).
>

you are correct, I haven't included the trace, a mistake on my part, I'll do 
that and re-run the image.

Dagg.



Re: image works in native but not in vm when -cpu is set to host

2020-12-05 Thread daggs



> Sent: Saturday, November 28, 2020 at 10:31 AM
> From: "daggs" 
> To: zarniwh...@ntlworld.com
> Cc: qemu-discuss@nongnu.org
> Subject: Re: image works in native but not in vm when -cpu is set to host
>
> Greetings,
>
> you are correct, I haven't included the trace, a mistake on my part, I'll do 
> that and re-run the image.
>
> Dagg.

here it is: https://ibb.co/M1tRY0h




how does qemu generate the path from 60-edk2-x86_64.json?

2021-01-30 Thread daggs
Greetings,

I was wondering how qemu generates the path for edk2-x86_64-code.fd from 
60-edk2-x86_64.json.
the file contains this: "filename": "share/qemu/edk2-x86_64-code.fd",
however the real path is /usr/share/qemu/edk2-x86_64-code.fd.

where does the /usr prefix come from?
the reason I'm asking is that I have two servers with the same qemu version 
(5.2.0), both running gentoo.
when I try to boot a UEFI guest on one, it works, but on the other I get this 
error:
error: Path 'share/qemu/edk2-x86_64-secure-code.fd' is not accessible: No such 
file or directory
if I change the path to /usr/share/qemu/edk2-x86_64-code.fd in 
60-edk2-x86_64.json, the vm boots again.

I've found this https://bugs.gentoo.org/766743, however if that were the cause 
I'd assume it would happen every time, or that nothing would depend on /usr 
when it does.
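
if I recall correctly, recent qemu can also print the list of data directories 
it searches (the "share/qemu/..." entries from the json are resolved against 
these, in order), which should show where the two servers differ:
qemu-system-x86_64 -L help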

any ideas?

Thanks,

Dagg.




Re: how does qemu generate the path from 60-edk2-x86_64.json?

2021-02-01 Thread daggs
Greetings Philippe,

> Sent: Monday, February 01, 2021 at 10:53 AM
> From: "Philippe Mathieu-Daudé" 
> To: "daggs" , qemu-discuss@nongnu.org
> Cc: "Jannik Glückert" , "Sergei Trofimovich" 
> 
> Subject: Re: how does qemu generate the path from 60-edk2-x86_64.json?
> 
> Sergei sent a fix provided by Jannik for this problem:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg53.html
> 
> Regards,
> 
> Phil.

Thanks.

I still don't understand why, of my two systems with the same version, only 
one reproduces the issue.

Dagg.



Re: cdrom scsi passthrough not working well

2021-02-03 Thread daggs
Greetings Philippe,

> Sent: Wednesday, February 03, 2021 at 6:48 PM
> From: "Philippe Mathieu-Daudé" 
> To: "daggs" , qemu-discuss@nongnu.org
> Cc: "qemu-devel" , "Qemu-block" 
> Subject: Re: cdrom scsi passthrough not working well
>
> Cc'ing qemu-block@ developers.
> 
> On 10/28/20 6:18 AM, daggs wrote:
> > Greetings,
> > 
> > I have a vm running under qemu 5.1.0 with a cdrom scsi passthrough into it.
> > I can eject the tray in and out but when I insert a disc, it isn't 
> > detected and the dmesg on the guest is filled with these prints:
> > [384216.443262] sr 0:0:0:0: ioctl_internal_command return code = 802
> > [384216.443268] sr 0:0:0:0: Sense Key : 0xb [current]
> > [384216.443272] sr 0:0:0:0: ASC=0x0 ASCQ=0x6
> > [384218.504142] sr 0:0:0:0: ioctl_internal_command return code = 802
> > [384218.504150] sr 0:0:0:0: Sense Key : 0xb [current]
> > [384218.504153] sr 0:0:0:0: ASC=0x0 ASCQ=0x6
> > [384220.561302] sr 0:0:0:0: ioctl_internal_command return code = 802
> > [384220.561308] sr 0:0:0:0: Sense Key : 0xb [current]
> > [384220.561312] sr 0:0:0:0: ASC=0x0 ASCQ=0x6
> > 
> > the vm is uefi q35 based, generated by libvirt 6.8.0, the cdrom part is 
> > this:
> > -blockdev 
> > {"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-2-backend","read-only":true}
> > -device 
> > scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-2-backend,id=hostdev0
> > 
> > is there something bad with the config or have I encountered a bug?
> > 
> > Thanks,
> > 
> > Dagg.
> > 
> 

I don't have this issue anymore, however, when I insert the cd, the read 
light blinks for a long time and when it is done, the disc isn't detected.
/dev/sr0 exists, both the drive and the cd are good

Dagg.



qemu vm stopped working after upgrade

2021-07-17 Thread daggs
Greetings,

yesterday I performed a long overdue upgrade of my server. I've ended up with 
one of my vms not working.
kernel, ucode, qemu and libvirt were upgraded, however I've ruled them all out 
as possible suspects after reverting to the previous versions, so the qemu 
version is not that important.
I'm using version 6.0.0

after investigating, I narrowed it down to these two scripts; one works, the 
other doesn't.
this is the working one:
qemu-system-x86_64 \
-machine pc-q35-5.0,accel=kvm,usb=off,smm=on,dump-guest-core=off \
-cpu host,migratable=on \
-m 15360 \
-smp 4,sockets=1,dies=1,cores=2,threads=2 \
-drive file=/home/streamer/streamer.img.qcow2.new,if=virtio,format=qcow2 \
-device 
vfio-pci,host=0000:00:02.0,romfile=/home/streamer/gpu-8086:5912-uefi.rom,multifunction=on
 \
-device vfio-pci,host=0000:00:1f.3,multifunction=on \
-usb \
-device usb-host,vendorid=0x046d,productid=0xc52e \
-device usb-host,vendorid=0x2548,productid=0x1002 \
-display none \
-netdev tap,id=hostnet0,ifname=virtsw-streamer,script=no,downscript=no \
-device e1000e,netdev=hostnet0,id=net0,mac=52:54:00:5a:4c:8c \
-blockdev 
'{"driver":"file","filename":"/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/streamer-vm-q35_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}'

this one doesn't work; it is an extrapolation of the cmd line executed by libvirt:
LC_ALL=C \
PATH=/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin
 \
HOME=/var/lib/libvirt/qemu/domain-2-streamer-vm-q35 \
USER=root \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-2-streamer-vm-q35/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-2-streamer-vm-q35/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-2-streamer-vm-q35/.config \
/usr/bin/qemu-system-x86_64 \
-name guest=streamer-vm-q35,debug-threads=on \
-S \
-machine 
pc-q35-5.0,accel=kvm,usb=off,smm=on,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format
 \
-cpu host,migratable=on \
-smp 4,sockets=1,dies=1,cores=2,threads=2 \
-m 15360 \
-blockdev 
'{"driver":"file","filename":"/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}'
 \
-blockdev 
'{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/streamer-vm-q35_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}'
 \
-display none \
-device 
vfio-pci,host=0000:00:02.0,id=hostdev1,romfile=/home/streamer/gpu-8086:5912-uefi.rom,multifunction=on
 \
-blockdev 
'{"driver":"file","filename":"/home/streamer/streamer.img.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}'
 \
-blockdev 
'{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}'
 \
-msg timestamp=on
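
(one difference worth flagging when hand-running a libvirt extrapolation: 
libvirt always passes -S, which starts the vcpus paused and resumes them 
through the monitor afterwards, so a standalone run needs -S dropped, or a 
monitor to resume from; a sketch, with a made-up socket path:
/usr/bin/qemu-system-x86_64 ... -S \
-monitor unix:/tmp/streamer-mon.sock,server,nowait
echo cont | socat - unix-connect:/tmp/streamer-mon.sock
)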

I cannot seem to find out why, can anyone assist?

Thanks,

Dagg.



binding to tap interface

2021-07-24 Thread daggs
Greetings,

after yet another libvirt upgrade which messed up my vms to the point that a 
downgrade doesn't fix it, I'm toying again with using scripts to set up and 
launch my vms.
my main problem in the past was the network.
my router is inside a vm; I need to bring up a virtual switch, connect the 
router and another vm to it, and connect the host to the router vm.
all using tap interfaces.
here is what I setup:
dagg@NCC-5001D ~/workspace/virt_nix_scripts $ brctl show virt_switch
bridge name bridge id   STP enabled interfaces
virt_switch 8000.12c886c96cc5   no  vsw_router
vsw_streamer
dagg@NCC-5001D ~/workspace/virt_nix_scripts $ ip a
5: veth:  mtu 1500 qdisc pfifo_fast state 
DOWN group default qlen 1000
link/ether 4e:13:0a:ec:01:b5 brd ff:ff:ff:ff:ff:ff
6: virt_switch:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
link/ether 12:c8:86:c9:6c:c5 brd ff:ff:ff:ff:ff:ff
8: vsw_streamer:  mtu 1500 qdisc pfifo_fast 
master virt_switch state DOWN group default qlen 1000
link/ether 86:8f:a9:e5:a3:4d brd ff:ff:ff:ff:ff:ff
9: vsw_router:  mtu 1500 qdisc pfifo_fast 
master virt_switch state DOWN group default qlen 1000
link/ether 4a:a0:ea:44:8b:c6 brd ff:ff:ff:ff:ff:ff

and now I'm missing the proper nic config on the qemu command line, which I 
was never able to configure properly.
what are the proper nic params?
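
something like this should be the minimal shape, assuming the tap is 
pre-created and enslaved to the switch as above (the mac is made up):
ip tuntap add dev vsw_streamer mode tap
ip link set vsw_streamer master virt_switch up
qemu-system-x86_64 ... \
-netdev tap,id=net0,ifname=vsw_streamer,script=no,downscript=no \
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
the NO-CARRIER on the taps should clear once qemu actually opens them.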

Thanks,

Dagg.



debugging guest os running atop of qemu+kvm

2023-03-11 Thread daggs
Greetings,

I have a libvirt+kvm+qemu setup running libreelec as guest.
the latest guest os update exposed a bug which joins an already existing one 
(which I've worked around for now).
the two bugs are as follows:
1. when the nic is set to virtio, dhcp isn't acquired; manual ip config doesn't 
work either.
2. the system stutters every few seconds, even in ssh.

dmesg has no visible errors; top on the vm and atop on the host don't show 
cpu or memory usage that might explain it.
so I'm left with a bug in the os. I assume I'm not the first one encountering 
this issue, so I'd like to know if there are recommendations on what to enable 
in the guest.
I can rebuild the system entirely and change anything I want in the kernel. the 
guest kernel is 6.1.14
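
a serial console is probably the cheapest thing to enable first; a sketch, 
assuming a hand launch rather than libvirt:
# qemu side:
qemu-system-x86_64 ... -serial stdio
# guest kernel cmdline:
console=ttyS0,115200 loglevel=7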

Thanks,

Dagg



Re: Creating virtual routers

2023-03-18 Thread daggs
Greetings Richard,

> Sent: Friday, March 17, 2023 at 8:43 PM
> From: "Lane" 
> To: qemu-discuss@nongnu.org
> Subject: Creating virtual routers
>
> HI,
> 
> I'd like to create two virtual routers where each router gives access
> to its own virtual LAN and then add vms to each LAN. This would all
> be on my localhost.
> 
> lan1 <---> r1 <---> r2 <---> lan2
> 
> Can I do this with Qemu, and if so, can someone point me in the right
> direction on what I need to do?
> 
> Richard
> 
> 

my home network is based on a libvirt+qemu vm with 5 nics passed through and 
one virtual nic bound to a virtual switch for host<=>vm connection, plus 
usb-based wifi, all running openwrt.
so this is doable.
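
a minimal all-local sketch of the lan1 <-> r1 <-> r2 <-> lan2 topology with 
plain bridges (the names are mine; -netdev bridge needs the bridges allowed in 
/etc/qemu/bridge.conf):
for br in lan1 lan2 r1r2; do ip link add "$br" type bridge && ip link set "$br" up; done
# r1 gets one nic on lan1 and one on the inter-router link, r2 mirrors it on lan2:
qemu-system-x86_64 ... \
-netdev bridge,id=n0,br=lan1 -device virtio-net-pci,netdev=n0 \
-netdev bridge,id=n1,br=r1r2 -device virtio-net-pci,netdev=n1
# guest vms then attach with a single nic on lan1 or lan2 the same way.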

Dagg



virtio socket accessible from unprivileged docker on a vm

2023-03-26 Thread daggs
Greetings,

first, feel free to point me to another location in case this is the wrong 
place to ask.

I have a qemu + kvm vm with a docker server inside of it. each running 
container is an unprivileged one.
I want to communicate over the vm's virtio channel from within each container. 
from what I know, an unprivileged container doesn't allow access to device 
nodes, meaning the virtio console is out of the question.
I wanted to know: if I write a C program that connects to the vm's virtio 
socket and run it from each container, will it work?
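
if it helps, AF_VSOCK needs no device node at all (it is a socket family, not 
a char dev), so unless the container's seccomp profile blocks it, a plain 
program should work; a sketch with socat instead of C, assuming guest-cid 3, 
port 1234 and a socat new enough for vsock addresses:
# the vm config gets: -device vhost-vsock-pci,id=vsock0,guest-cid=3
# on the host (the host is always CID 2):
socat VSOCK-LISTEN:1234,fork -
# inside the unprivileged container in the guest:
socat - VSOCK-CONNECT:2:1234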

Thanks.

Dagg



[no subject]

2023-05-01 Thread daggs
Greetings,

I'm trying to boot a VM with passthrough of the onboard sound card and the gpu 
using libvirt[1] and qemu.
I have a script that loads the needed KVM mods, starts libvirt and preps the 
HDD, then starts the VM with virsh.

I've configured qemu hooks to run scripts in the relevant events.
In the prepare hook, I disable the active ui, unbind the screen and consoles, 
unload all the hw's mods, then load the vfio mods with the devs' IDs and allow 
unsafe intrs.
Then libvirt tries to start the VM and fails with this error:
error: Failed to start domain 'win_user_home'
error: internal error: qemu unexpectedly closed the monitor: 
2023-05-01T14:59:49.968252Z qemu-system-x86_64: -device 
{"driver":"vfio-pci","host":"0000:05:00.0","id":"hostdev0","bus":"pci.5","multifunction":true,"addr":"0x0"}:
 vfio 0000:05:00.0: failed to setup container for group 14: Failed to set iommu 
for container: Operation not permitted

The GPU has its own iommu group[2] and I pass the soundcard too.

Why am I getting this error?
I did get the VM to start in an earlier, simpler setup with fewer devices, so 
I know it boots.
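
(one common cause of that exact message is another device in the same iommu 
group still bound to its host driver; a quick check, with group 14 taken from 
the error above:
ls /sys/kernel/iommu_groups/14/devices/
lspci -nnk -s 05:00.0   # "Kernel driver in use:" should say vfio-pci
)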

thanks,

Dagg

1. xml file:

[the domain XML lost its tags to the archive's HTML stripping; what survives 
shows domain 'win_user_home', uuid f17be092-0fcb-47b6-b717-d4b8052ed289, a 
Windows 11 guest per the libosinfo hint, 24582834 KiB of memory, 12 vcpus, 
secure-boot OVMF firmware (/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd with 
nvram /var/lib/libvirt/qemu/nvram/win_user_home_VARS.fd) and emulator 
/usr/bin/qemu-system-x86_64]


2. iommu list:
IOMMU Group 0:
00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 1:
00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 2:
00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 3:
00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 4:
00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 5:
00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 6:
00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 7:
00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 8:
00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 9:
00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 10:
00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 11:
00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus 
Controller [1022:790b] (rev 61)
00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC 
Bridge [1022:790e] (rev 51)
IOMMU Group 12:
00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 0 [1022:1440]
00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 1 [1022:1441]
00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 2 [1022:1442]
00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 3 [1022:1443]
00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 4 [1022:1444]
00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Matisse/Vermeer Data Fabric: Device 18h; Function 5 [1022:1445]
00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc

invoking qemu-bridge-helper manually

2024-10-26 Thread daggs
Greetings,

I have two vms, one a router and the other a utility vm, running under 
libvirt+qemu.
they are connected via a virtual switch which works great, e.g. the util vm 
asks for an ip, gets one and interacts with the router vm and the outside 
world.
now I want to connect the host to that virt switch. I'm able to create a tap 
and assign it to the virt switch, but the tap remains in no-carrier state as 
there is no userspace program to manage the data transfer.
the virt switch has two tap devices (one for each vm connected to it) which I 
assume are created using the util stated above.
so I was wondering if there is a way to invoke this util manually so qemu will 
handle the data transfer as it already does, saving me the need to reinvent 
the wheel.
from what I can see, the only issue is the unixfd value passed from the 
invoker.
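
(rather than reproducing the unixfd handshake by hand, two routes should work; 
the helper path and the address below are assumptions:
# option 1: let qemu fork the helper itself; the bridge must be allowed in /etc/qemu/bridge.conf
qemu-system-x86_64 ... \
-netdev bridge,id=hostnet0,br=virt_switch,helper=/usr/libexec/qemu-bridge-helper \
-device virtio-net-pci,netdev=hostnet0
# option 2: for host connectivity only, skip the tap entirely; the bridge
# device is itself the host's port on the switch
ip link set virt_switch up
ip addr add 192.168.100.1/24 dev virt_switch
)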

Thanks,

Dagg



vm lagging after distro upgrade

2025-05-09 Thread daggs
Greetings,

I'm running libreelec inside a qemu + libvirt vm. I've upgraded to the latest 
libreelec (12.0.2) and now I'm experiencing lags in both ssh and the ui.
I thought it might be that the vm is too resource intensive, but it doesn't 
seem like it; top in the vm and on the host shows around 50% at most.
qemu version is 9.1.2, the cmd line running is (pid 32587, user streamer, 
started 10:51):
/usr/bin/qemu-system-x86_64 -name guest=streamer,debug-threads=on -S 
-object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/home/streamer/.config/libvirt/qemu/lib/domain-1-streamer/master-key.aes"} 
-blockdev {"driver":"file","filename":"/usr/share/qemu/edk2-x86_64-secure-code.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} 
-blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} 
-blockdev {"driver":"file","filename":"/home/streamer/.config/libvirt/qemu/nvram/streamer_VARS.fd","node-name":"libvirt-pflash1-storage","read-only":false} 
-machine pc-q35-5.0,usb=off,vmport=off,smm=on,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-storage,hpet=off,acpi=on 
-accel kvm -cpu host,migratable=on -global driver=cfi.pflash01,property=secure,value=on 
-m size=15728640k -object {"qom-type":"memory-backend-ram","id":"pc.ram","size":16106127360} 
-overcommit mem-lock=off -smp 4,sockets=1,dies=1,clusters=1,cores=2,threads=2 
-uuid c5208cc8-c4ae-4b52-a54a-752b6d861aff -display none -no-user-config -nodefaults 
-chardev socket,id=charmonitor,fd=16,server=on,wait=off 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew 
-global kvm-pit.lost_tick_policy=delay -no-shutdown -global ICH9-LPC.disable_s3=1 
-global ICH9-LPC.disable_s4=1 -boot strict=on 
-device {"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"} 
-device {"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} 
-device {"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} 
-device {"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} 
-device {"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} 
-device {"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} 
-device {"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"} 
-device {"driver":"pcie-pci-bridge","id":"pci.8","bus":"pci.7","addr":"0x0"} 
-device {"driver":"pcie-root-port","port":23,"chassis":9,"id":"pci.9","bus":"pcie.0","addr":"0x3.0x2"} 
-device {"driver":"pcie-root-port","port":8,"chassis":10,"id":"pci.10","bus":"pcie.0","multifunction":true,"addr":"0x3"} 
-device {"driver":"pcie-root-port","port":9,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x1"} 
-device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pcie.0","addr":"0x14"} 
-device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pcie.0","addr":"0x15"} 
-blockdev {"driver":"file","filename":"/home/streamer/LibreELEC-Generic.x86_64-kvm.img.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} 
-blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null} 
-device {"driver":"virtio-blk-pci","bus":"pcie.0","addr":"0x17","drive":"libvirt-2-format","id":"virtio-disk0","bootindex":1} 
-blockdev {"driver":"host_cdrom","filename":"/dev/sr0","node-name":"libvirt-1-storage","read-only":true} 
-device {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","drive":"libvirt-1-storage","id":"scsi0-0-0-0"} 
-netdev {"type":"tap","fd":"19","id":"hostnet0"} 
-device {"driver":"e1000e","netdev":"hostnet0","id":"net0","mac":"xx:xx:xx:xx:xx:xx","bus":"pci.1","addr":"0x0"} 
-chardev pty,id=charserial0 -device {"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0} 
-audiodev {"id":"audio1","driver":"none"} -global ICH9-LPC.noreboot=off 
-watchdog-action reset 
-device {"driver":"vfio-pci","host":"0000:00:1f.3","id":"hostdev0","bus":"pcie.0","addr":"0x1f.0x4"} 
-device {"driver":"usb-host","id":"hostdev1","bus":"usb.0","port":"2"} 
-device {"driver":"usb-host","id":"hostdev2","bus":"usb.0","port":"1"} 
-device {"driver":"usb-host","id":"hostdev3","bus":"usb.0","port":"3"} 
-device {"driver":"vfio-pci","host":"0000:00:02.0","id":"hostdev4","bus":"pcie.0","multifunction":true,"addr":"0x2","rombar":0} 
-device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.4","addr":"0x0"} 
-object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"} 
-device {"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.2","addr":"0x0"} 
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny 
-msg timestamp=on

the vm's guest kernel is 6.6.7. any ideas why I see such lag and how I can 
fix/debug it?
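
two cheap probes that might separate host-side contention from an in-guest 
problem (the pid is the one from the ps output above):
# inside the guest: the 'st' column is cpu time stolen by the hypervisor
vmstat 1
# on the host: where the guest/vcpu time goes while it lags
perf kvm --host top -p 32587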