On Sat, Dec 12, 2009 at 09:03:26AM -0600, Anthony Liguori wrote:
> Spice has been closed source for a long time. For those that have been
> involved with Spice development, I'm sure you understand very well why
> it's so wonderful, but for the rest of us, Spice didn't exist until
> yesterday so it's going to take a little bit for us to all understand
> what actually about it makes it special.
I personally wasn't involved, but I've seen it running fullscreen video. I
can't compare it with rdesktop as I've never used that, but I know the
comparison with VNC all too well ;). It still has to get faster and avoid
decoding the compressed stream on the server.

> project? You complain about VNC's extensibility, but so far, we have no
> idea whether it's even possible to extend Spice. Given the interactions
> so far, I'm a little concerned about how well we can influence the protocol.

Well, this is open source, so anybody can change it, fork it, or simply
take it over. And it's in everyone's interest to help converge on the best
SPICE protocol.

> If spice really needs to be able to evolve on it's own, what would it
> take for spice to be implementable from an external process? What level
> of interaction does it need with qemu? As long as we can prevent any
> device state from escaping from qemu, I'd be very interested in a model
> where spice lived entirely in a separate address space.

Funny thing is, I think it's already in a separate process! ;) (you know,
it wasn't open source until recently, so...). But now that you mention it,
I think it should be changed to *not* be in a separate process. Linux is
monolithic, and KVM is the monolithic way (Xen is the slow microkernel
way); I don't think you need to worry about sharing the spice libs in the
same address space. We want to make it as fast as feasible on the server
side. A separate address space forces TLB flushes and mm switches, which
screw performance, and spice is all about performance. We already have one
pipe connecting server and client; we should avoid having two pipes as
much as possible, and instead have a single exit into a qemu spice ring
that sends directly to the spice client via virtio, rather than sending to
a separate process that eventually forwards to the remote spice client.
Like 99% of microkernel designs, they're very wasteful: what good does it
do that the network is still up and running when you've lost access to
your hard disk because the sata driver crashed? Or that you still have
access to the sata disk, but the network driver crashed and you can't
reach the system anymore? Sure, there are corner cases where microkernels
can be useful, but those won't use the Linux kernel or the monolithic KVM
design in the first place; they'll run dog-slow and they'll demonstrate
the mathematical correctness of all software running on the bare metal,
which means you can't patch the software anymore until the math people
recompute the proofs. Luckily, we're not in that environment.

In this case, keeping it separate means the desktop remains reachable
through the network while the virtual graphics card, virtual mouse and so
on have crashed; if somebody uses qxl, that means the VM has to be
rebooted anyway. OK, it won't risk creating disk corruption, but in
practice the slowdown it creates isn't worth paying compared to the minor
risk that you corrupt a bit in a qcow2 cluster bitmap in a way that
doesn't crash the VM but silently corrupts data. If anything, if I had to
pay a slowdown for higher reliability of disk I/O, it would be much better
to move qcow2 and the I/O stack into their own address space so they're
protected from all the rest too... plus qcow2 is a lot less performance
critical than network graphics! Plus there's valgrind and all that
userland tooling to trap memory corruption, orders of magnitude easier to
debug than random kernel corruption (which we fix over time too, so we
keep running at max speed).

Having the thing modular at runtime, not only at ./configure time
(loadable dynamically into the qemu address space), OTOH is a great idea:
you can disable the module and verify the crashes go away without having
to rebuild.