Re: Applying Throttle Block Filter via QMP Command

2025-01-08 Thread Henry lol
I'm sorry for giving you the wrong information.
I didn't use the -drive parameter in QEMU, but the -blockdev parameter instead.

Below are the commands I used in the scenario where the I/O performance
remains the same:

1-1. Execute the QEMU process with:
...
-object throttle-group,id=tg,x-bps-total=10485760 \
-blockdev '{"driver":"qcow2","node-name":"qcow2-node","file":{"driver":"file","filename":"/path/to/file.qcow2"}}' \
-device virtio-blk-pci,scsi=off,drive=qcow2-node,id=did,bootindex=1,bus=pci.0,addr=0x05,serial=1234

1-2. Run the blockdev-add command via the QMP socket:
{
  "execute": "blockdev-add",
  "arguments": {
    "driver": "throttle",
    "node-name": "throttle-node",
    "throttle-group": "tg",
    "file": "qcow2-node"
  }
}

Scenario where throttling works as expected:

2-1. Execute the QEMU process with:
...
-object throttle-group,id=tg,x-bps-total=10485760 \
-blockdev '{"driver":"throttle","throttle-group":"tg","node-name":"throttle-node","file":{"driver":"qcow2","node-name":"qcow2-node","file":{"driver":"file","filename":"/path/to/file.qcow2"}}}' \
-device virtio-blk-pci,scsi=off,drive=qcow2-node,id=did,bootindex=1,bus=pci.0,addr=0x05,serial=1234
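
As a quick sanity check (not part of the original steps), the node the
virtio-blk device is actually attached to in each scenario can be checked
over the same QMP socket with the standard query-block command; the
"inserted" member of each entry reports the node-name in use:

{ "execute": "query-block" }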

On Wed, Jan 8, 2025 at 6:45 PM, Henry lol wrote:
>
> Hello,
>
> I want to apply a throttle block filter using the QMP command, but it
> doesn't seem to work: the I/O performance remains the same.
>
> Are there any additional steps I need to follow?
> I predefined the throttle-group object and block device in the QEMU
> parameters and then used the blockdev-add QMP command to apply the
> filter, as described in the link
> - https://github.com/qemu/qemu/blob/master/docs/throttle.txt#L315-L322
>
> Additionally, I’ve confirmed that the filter works well when defined
> in the QEMU -drive parameter instead of using the QMP command.
>
> Thanks,



Re: Applying Throttle Block Filter via QMP Command

2025-01-08 Thread Henry lol
Actually, the above is a simplified scenario; my real goal is to apply
the filter to an overlay image, as follows.

1. Execute the QEMU process as in 1-1 or 2-1.
2. Run the command to create the overlay image:
{
  "execute": "blockdev-snapshot-sync",
  "arguments": {
    "node-name": "qcow2-node",
    "snapshot-file": "/path/to/overlay.qcow2",
    "format": "qcow2",
    "snapshot-node-name": "overlay-node"
  }
}

3. Run the blockdev-add command to apply the filter:
{
  "execute": "blockdev-add",
  "arguments": {
    "driver": "throttle",
    "node-name": "throttle-node2",
    "throttle-group": "tg",
    "file": "overlay-node"
  }
}
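
For completeness, the set of nodes that exists after these steps can be
listed with the standard query-named-block-nodes command, which reports
each named node together with its driver, so it is easy to confirm that
overlay-node and throttle-node2 were created (a sanity check, not part
of the original steps):

{ "execute": "query-named-block-nodes" }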

Sincerely,




Re: CS-4231 on SS-5

2025-01-08 Thread JF Sebastian
Hi Mark,

That's a good point, so I'll try that: first I'll check the status of the
existing PC CS4231 model, then look at the existing driver code and see
how things fit together. I'm also trying to get my hands on a real SS-5
to dispel some doubts.

Thank you for your time once again.

Best regards,

Seb.

Sent: Wednesday, January 08, 2025 at 11:45 AM
From: "Mark Cave-Ayland" 
To: "JF Sebastian" 
Cc: "Peter Maydell" , qemu-discuss@nongnu.org
Subject: Re: CS-4231 on SS-5

On 06/01/2025 05:16, JF Sebastian wrote:

> In short, I'm wondering if the solution for sound on the SS-5 in QEMU would then be a
> completely new undertaking from scratch, or if, in fact, it would be a matter of
> gluing together existing work, the majority of which is already done.
> Thank you for your time.
> Best regards
>
> Seb.

Hi Seb,

I'm not really sure of the status of the PC CS4231 driver in Linux and/or QEMU, so a
good place to start would be to test it with the latest QEMU. If that works (and it
is shown that the SS-5 and PC drivers in the kernel can be combined), then yes, in
theory it should be possible to make use of some of the existing code.


ATB,

Mark.


Re: Applying Throttle Block Filter via QMP Command

2025-01-08 Thread Alberto Garcia
On Wed, Jan 08, 2025 at 06:45:54PM +0900, Henry lol wrote:
> I want to apply a throttle block filter using the QMP command, but
> it doesn't seem to work: the I/O performance remains the same.
> 
> Are there any additional steps I need to follow?
> I predefined the throttle-group object and block device in the QEMU
> parameters and then used the blockdev-add QMP command to apply the
> filter, as described in the link
> - https://github.com/qemu/qemu/blob/master/docs/throttle.txt#L315-L322
> 
> Additionally, I’ve confirmed that the filter works well when defined
> in the QEMU -drive parameter instead of using the QMP command.

Can you summarize the commands that you are using?

Simply adding the filter with 'blockdev-add' is not enough: that
creates the backend (the "host" part, i.e. how the block device is
actually emulated), but you also need a frontend (the device that the
guest VM can see, e.g. a SCSI hard drive, an SD card, etc.).

The -drive parameter creates both things (frontend and backend).
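
For illustration, a frontend can then be attached to the filter node
with device_add, along these lines (a minimal sketch; the device id and
the choice of virtio-blk here are made up for the example):

{
  "execute": "device_add",
  "arguments": {
    "driver": "virtio-blk-pci",
    "drive": "throttle-node",
    "id": "vblk-throttled"
  }
}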

See here: 
https://www.linux-kvm.org/images/3/34/Kvm-forum-2013-block-dev-configuration.pdf

Berto



NVME: Multiple IRQs but only failed FLUSH completion visible in CQ

2025-01-08 Thread Maciej Leks
I'm observing unexpected behavior with NVMe command completions in
QEMU. Here's what happens:

I issue a single READ command (NVME_NVM_CMD_READ) with:

CID: 8
NSID: 1
LBA: 0x802
NLB: 1

The trace log shows two IRQ notifications:
pci_nvme_irq_msix raising MSI-X IRQ vector 1
apic_deliver_irq dest 0 dest_mode 0 delivery_mode 0 vector 33 trigger_mode 0
[...]
pci_nvme_irq_msix raising MSI-X IRQ vector 1
apic_deliver_irq dest 0 dest_mode 0 delivery_mode 0 vector 33 trigger_mode 0

However, when checking the completion queue, I only see a failed FLUSH command:

CID: 0
Status: 0x400b (sct=1, sc=11)
This FLUSH command was not issued by my code.

The trace shows this unexpected FLUSH command:
pci_nvme_io_cmd cid 0 nsid 0x0 sqid 1 opc 0x0 opname 'NVME_NVM_CMD_FLUSH'
pci_nvme_enqueue_req_completion cid 0 cqid 1 dw0 0x0 dw1 0x0 status 0x400b

I suspect this might be related to automatic flushing triggered by
reaching the last CQ element (both SQ and CQ length are set to 8) and
starting a new cycle, but I'm not sure whether this behavior is correct.
Could someone explain why I'm seeing this FLUSH command and why my READ
completion isn't visible in the CQ?
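
For reference, my understanding is that a consumer distinguishes new
completions from stale ones via the phase tag, which the controller
inverts on every wrap of the completion queue; a minimal sketch of that
check (struct layout per the NVMe spec; the helper name is made up):

#include <stdbool.h>
#include <stdint.h>

/* Completion queue entry, 16 bytes, per the NVMe specification. */
struct nvme_cqe {
    uint32_t dw0;       /* command-specific result */
    uint32_t dw1;
    uint16_t sq_head;   /* current submission queue head pointer */
    uint16_t sq_id;     /* submission queue that sourced the command */
    uint16_t cid;       /* command identifier */
    uint16_t status;    /* bit 0: phase tag, bits 1..15: status field */
};

/* An entry is new only if its phase tag matches the phase the host
 * expects for the current pass; the expected phase flips each time
 * the head wraps from the last slot back to slot 0. */
static bool cqe_is_new(const struct nvme_cqe *cqe, bool expected_phase)
{
    return (bool)(cqe->status & 1) == expected_phase;
}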

Full Trace log:
pci_nvme_mmio_write addr 0x1008 data 0x0 size 4
pci_nvme_mmio_doorbell_sq sqid 1 new_tail 0
pci_nvme_io_cmd cid 8 nsid 0x1 sqid 1 opc 0x2 opname 'NVME_NVM_CMD_READ'
pci_nvme_read cid 8 nsid 1 nlb 1 count 512 lba 0x802
pci_nvme_map_prp trans_len 512 len 512 prp1 0x144000 prp2 0x0 num_prps 1
pci_nvme_map_addr addr 0x144000 len 512
pci_nvme_io_cmd cid 0 nsid 0x0 sqid 1 opc 0x0 opname 'NVME_NVM_CMD_FLUSH'
pci_nvme_enqueue_req_completion cid 0 cqid 1 dw0 0x0 dw1 0x0 status 0x400b
pci_nvme_err_req_status cid 0 nsid 0 status 0x400b opc 0x0
pci_nvme_irq_msix raising MSI-X IRQ vector 1
apic_deliver_irq dest 0 dest_mode 0 delivery_mode 0 vector 33 trigger_mode 0
apic_register_write register 0x0b = 0x0
pci_nvme_rw_cb cid 8 blk 'drv0'
pci_nvme_rw_complete_cb cid 8 blk 'drv0'
pci_nvme_enqueue_req_completion cid 8 cqid 1 dw0 0x0 dw1 0x0 status 0x0
pci_nvme_irq_msix raising MSI-X IRQ vector 1
apic_deliver_irq dest 0 dest_mode 0 delivery_mode 0 vector 33 trigger_mode 0
apic_register_write register 0x0b = 0x0

Additional info:
QEMU versions: 9.1 and 9.2.

Best Regards,
Maciek Leks



Re: Emulating SVE on non-SVE host with qemu-system-aarch64

2025-01-08 Thread Mitchell Augustin
Hi Peter,

I don't want to ignore it if you think I've found a bug here, but my
only reproducer, unfortunately, is the VM that was launched with
libvirt/virt-install. If you know of a pure QEMU command that would
spin up a new VM attempting to use the "neoverse-v1" CPU model with KVM
(run on a host without SVE), that is what I would use to build a
reproducer without libvirt; otherwise the libvirt-generated command is
all I have, since I typically do everything through libvirt.
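
(Something along these lines is what I have in mind; a sketch only,
with the machine options, memory size, and disk path as placeholder
guesses:

qemu-system-aarch64 \
    -machine virt,gic-version=3 -accel kvm -cpu neoverse-v1 \
    -m 2048 -nographic \
    -drive if=virtio,format=qcow2,file=/path/to/disk.qcow2
)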
I'm not sure whether the libvirt-generated command is inherently useful
to you, since it is filled with a ton of libvirt "stuff" that may not be
standalone, but here is the full QEMU command that throws that error
when I run "virsh start maugustin":

/usr/bin/qemu-system-aarch64 -name guest=maugustin,debug-threads=on -S
-object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-maugustin/master-key.aes"}
-blockdev {"driver":"file","filename":"/usr/share/AAVMF/AAVMF_CODE.ms.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}
-blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/maugustin_VARS.fd","node-name":"libvirt-pflash1-storage","read-only":false}
-machine virt-9.0,usb=off,gic-version=3,dump-guest-core=off,memory-backend=mach-virt.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-storage,acpi=on
-accel kvm -cpu host -m size=18874368k
-object {"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":19327352832}
-overcommit mem-lock=off -smp 4,sockets=4,cores=1,threads=1
-uuid 4af37efa-506c-484b-b4bf-ba9f6d52bdbe -no-user-config -nodefaults
-chardev socket,id=charmonitor,fd=31,server=on,wait=off
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on
-device {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"}
-device {"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"}
-device {"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"}
-device {"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"}
-device {"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"}
-device {"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"}
-device {"driver":"pcie-root-port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"}
-device {"driver":"pcie-root-port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"}
-device {"driver":"pcie-root-port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x2"}
-device {"driver":"pcie-root-port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"}
-device {"driver":"pcie-root-port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"}
-device {"driver":"pcie-root-port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"}
-device {"driver":"pcie-root-port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"}
-device {"driver":"pcie-root-port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"}
-device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"}
-device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"}
-device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"}
-blockdev {"driver":"file","filename":"/vms/noble-server-cloudimg-arm64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-4-format","read-only":true,"driver":"qcow2","file":"libvirt-4-storage","backing":null}
-blockdev {"driver":"file","filename":"/vms/maugustin-vda.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":"libvirt-4-format"}
-device {"driver":"virtio-blk-pci","bus":"pci.5","addr":"0x0","drive":"libvirt-3-format","id":"virtio-disk0","bootindex":1}
-blockdev {"driver":"file","filename":"/vms/maugustin-seed.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}
-device {"driver":"virtio-blk-pci","bus":"pci.6","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk1"}
-device {"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","id":"scsi0-0-0-0"}
-netdev {"type":"tap","fd":"32","vhost":true,"vhostfd":"35","id":"hostnet0"}
-device {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:d2:06:80","bus":"pci.1"


Applying Throttle Block Filter via QMP Command

2025-01-08 Thread Henry lol
Hello,

I want to apply a throttle block filter using the QMP command, but it
doesn't seem to work: the I/O performance remains the same.

Are there any additional steps I need to follow?
I predefined the throttle-group object and block device in the QEMU
parameters and then used the blockdev-add QMP command to apply the
filter, as described in the link
- https://github.com/qemu/qemu/blob/master/docs/throttle.txt#L315-L322

Additionally, I’ve confirmed that the filter works well when defined
in the QEMU -drive parameter instead of using the QMP command.

Thanks,



Re: CS-4231 on SS-5

2025-01-08 Thread Mark Cave-Ayland

On 06/01/2025 05:16, JF Sebastian wrote:

In short, I'm wondering if the solution for sound on the SS-5 in QEMU would then be a
completely new undertaking from scratch, or if, in fact, it would be a matter of
gluing together existing work, the majority of which is already done.

Thank you for your time.
Best regards

Seb.


Hi Seb,

I'm not really sure of the status of the PC CS4231 driver in Linux and/or QEMU, so a 
good place to start would be to test it with the latest QEMU. If that works (and it 
is shown that the SS-5 and PC drivers in the kernel can be combined), then yes, in 
theory it should be possible to make use of some of the existing code.
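
For a concrete starting point, instantiating the PC CS4231 model for
such a test might look something like this (a sketch; the machine type,
the null audio backend, and the disk path are assumptions for
illustration, not a recommendation from this thread):

qemu-system-i386 -M pc -audiodev none,id=snd0 \
    -device cs4231a,audiodev=snd0 \
    -drive file=/path/to/guest.img,format=raw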



ATB,

Mark.