On Tue, 31 Jan 2023 at 16:19, Marc-André Lureau <marcandre.lur...@gmail.com> wrote:
>
> Hi
>
> On Wed, Feb 1, 2023 at 12:29 AM Stefan Hajnoczi <stefa...@gmail.com> wrote:
> >
> > On Tue, 31 Jan 2023 at 14:48, Marc-André Lureau
> > <marcandre.lur...@gmail.com> wrote:
> > >
> > > Hi
> > >
> > > On Tue, Jan 31, 2023 at 10:20 PM Stefan Hajnoczi <stefa...@gmail.com>
> > > wrote:
> > > >
> > > > On Tue, 31 Jan 2023 at 12:43, Alex Bennée <alex.ben...@linaro.org>
> > > > wrote:
> > > > >
> > > > >
> > > > > Stefan Hajnoczi <stefa...@gmail.com> writes:
> > > > >
> > > > > > On Sun, 29 Jan 2023 at 17:10, Stefan Hajnoczi <stefa...@gmail.com>
> > > > > > wrote:
> > > > > >>
> > > > > >> Hi Shreyansh, Gerd, and Laurent,
> > > > > >> The last virtio-sound RFC was sent in February last year. It was a
> > > > > >> spare time project. Understandably it's hard to complete the whole
> > > > > >> thing on weekends, evenings, etc. So I wanted to suggest relaunching
> > > > > >> the virtio-sound effort as a Google Summer of Code project.
> > > > > >>
> > > > > >> Google Summer of Code is a 12-week full-time remote work internship.
> > > > > >> The intern would be co-mentored by some (or all) of us. The project
> > > > > >> goal would be to merge virtio-sound with support for both playback
> > > > > >> and capture. Advanced features for multi-channel audio, etc can be
> > > > > >> stretch goals.
> > > > > >>
> > > > > >> I haven't looked in detail at the patches from February 2022, so I
> > > > > >> don't know the exact state and whether there is enough work remaining
> > > > > >> for a 12-week internship. What do you think?
> > > > > >
> > > > > > Adding Anton.
> > > > > >
> > > > > > I have updated the old wiki page for this project idea and added it
> > > > > > to the 2023 ideas list:
> > > > > > https://wiki.qemu.org/Internships/ProjectIdeas/VirtioSound
> > > > > >
> > > > > > Please let me know if you wish to co-mentor this project!
> > > > >
> > > > > I'd be happy to help - although if someone was rust inclined I'd also
> > > > > be happy to mentor a rust-vmm vhost-user implementation of VirtIO sound.
> > > >
> > > > Maybe Gerd can tell us about the QEMU audio subsystem features that
> > > > may be lost if developing a standalone vhost-user device.
> > > >
> > > > Two things come to mind:
> > > > 1. May not run on all host OSes that QEMU supports if it supports
> > > > fewer native audio APIs than QEMU.
> > >
> > > Using GStreamer in Rust is well supported, and should give all the
> > > backends that you ever need (alternatively, there might be some Rust
> > > audio crates that I am not aware of). In all cases, I would not
> > > implement various backends the way QEMU audio/ has grown...
> > >
> > > > 2. May not support forwarding audio to remote desktop solutions that
> > > > stream audio over the network. I don't know if/how this works with
> > > > VNC/RDP/Spice, but a separate vhost-user process will need to do extra
> > > > work to send the audio over the remote desktop connection.
> > >
> > > Well, some of the goal with `-display dbus` is to move the remote
> > > desktop handling outside of QEMU. I had in mind that the protocol will
> > > have to evolve to handle multiprocess, so audio, display, input etc
> > > interfaces can be provided by external processes. In fact, it should
> > > be possible without protocol change for audio devices with the current
> > > interface
> > > (https://gitlab.com/qemu-project/qemu/-/blob/master/ui/dbus-display1.xml#L483).
> > >
> > > In short, I wish the project implements the device in Rust, with
> > > `gstreamer` and `dbus` as optional features.
> > > (that should be
> > > introspectable via --print-capabilities stuff)
> >
> > Cool, then let's go with a Rust vhost-user device implementation!
> >
> > Can you elaborate on how the "gstreamer" feature would be used by the
> > process launching the vhost-user back-end? Do you mean there should be
> > a standard command-line syntax for specifying the playback and capture
> > devices that maps directly to GStreamer (e.g. like gst-launch-1.0)?
>
> Roughly what comes in mind is that the backend should always offer a
> --audio-backend=... option, defaulting to something sensible, and
> always have `none`, I guess.
> - when the `gstreamer` feature & capability is available, can be set
> to 'gstreamer'. Additionally, options like --gst-sink='pipeline'
> --gst-src='pipeline' could be supported too, but it should do
> something sensible here as well, by using autoaudiosink/autoaudiosrc
> by default.
> - when the `dbus` feature & capability is available, can be set to
> 'dbus' (or qemu-dbus?). It may require some extra option too, to
> communicate back with qemu, such as `--dbus-addr=addr`, or
> `--dbus-fd=N`.
I see. Thanks for explaining. I have updated the project idea now.

Feel free to make edits or add yourselves as mentors:
https://wiki.qemu.org/Internships/ProjectIdeas/VirtioSound

Stefan
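As a postscript, the option surface Marc-André sketches above could look roughly like the following. This is a hypothetical std-only Rust sketch, not code from the thread: the option names (--audio-backend, --gst-sink, --gst-src, --dbus-addr) and the autoaudiosink/autoaudiosrc defaults come from the discussion, but the `none` default, the `--option=value`-only syntax, and the `AudioBackend` enum shape are assumptions. A real back-end would also need the vhost-user socket option, `--print-capabilities`, `--dbus-fd`, etc.

```rust
/// Resolved audio-backend configuration (hypothetical shape).
#[derive(Debug, PartialEq)]
enum AudioBackend {
    None,
    Gstreamer { sink: String, src: String },
    Dbus { addr: Option<String> },
}

/// Parse `--option=value` arguments into an AudioBackend.
fn parse_args(args: &[&str]) -> Result<AudioBackend, String> {
    let mut backend = String::from("none"); // assumed default; the thread leaves it open
    let mut gst_sink = String::from("autoaudiosink"); // defaults suggested in the thread
    let mut gst_src = String::from("autoaudiosrc");
    let mut dbus_addr: Option<String> = None;

    for arg in args {
        // Split at the first '=' so values may themselves contain '='
        // (e.g. a D-Bus address like unix:path=/tmp/dbus.sock).
        let (name, value) = arg
            .split_once('=')
            .ok_or_else(|| format!("expected --option=value, got {arg}"))?;
        match name {
            "--audio-backend" => backend = value.to_string(),
            "--gst-sink" => gst_sink = value.to_string(),
            "--gst-src" => gst_src = value.to_string(),
            "--dbus-addr" => dbus_addr = Some(value.to_string()),
            _ => return Err(format!("unknown option: {name}")),
        }
    }

    match backend.as_str() {
        "none" => Ok(AudioBackend::None),
        "gstreamer" => Ok(AudioBackend::Gstreamer { sink: gst_sink, src: gst_src }),
        "dbus" => Ok(AudioBackend::Dbus { addr: dbus_addr }),
        other => Err(format!("unsupported audio backend: {other}")),
    }
}

fn main() {
    // `gstreamer` backend falling back to the assumed autoaudio* defaults.
    let cfg = parse_args(&["--audio-backend=gstreamer"]).unwrap();
    println!("{cfg:?}");
}
```

Matching on a backend name this way keeps `gstreamer` and `dbus` as optional Cargo features: each match arm can be gated with `#[cfg(feature = "...")]`, so a build without the feature simply rejects the option, which is what the `--print-capabilities` introspection would report.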