Hi,

>> I don't think an explicit handshake via
>> QXL_IO_CLIENT_MONITORS_CONFIG_DONE is a good idea.
>
> Why? I don't see the below as being better - it just moves the
> checking to the guest, and racily.

It is more robust.  We don't have to keep state in qxl for the
handshake, which is one less opportunity for a buggy guest driver to
screw things up.  It is also closer to how real hardware handles this.
Yes, there is a race, but we detect and handle that case, so it is no
problem.

>> How about this update protocol:
>>
>> qemu:
>>   (1) set QXLRom::client_monitors_config_updating
>>   (2) fill QXLRom::client_monitors_config
>>   (3) raise QXL_INTERRUPT_CLIENT_MONITORS_CONFIG
>>   (4) clear QXLRom::client_monitors_config_updating
>>
>> guest:
>>   (1) clear QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
>>   (2) wait until QXLRom::client_monitors_config_updating is clear
>>   (3) parse QXLRom::client_monitors_config
>>   (4) check QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
>>       (a) when set, goto (1).
>>       (b) when clear we are done.
>
> Hmm, you are making the guest more complicated instead of the host
> side, don't know if that's better.
>
> Point (2) is a busy wait, no?

The guest doesn't have to spin, it can yield().  But yes, better not do
that in an IRQ handler.

> Also, guest will have to handle half old / half new configurations:

It should not.

> qemu(1)
> qemu(2) start
> guest(1)
> guest(2)
> guest(3) reads half old half new

No, the guest will wait at (2).

But I see it can happen the other way around: the guest starts reading
and qemu starts updating in parallel.  We need one additional step (3a):
the guest needs to check that QXLRom::client_monitors_config_updating is
clear after parsing the data.  If qemu updated the data while the guest
was reading it, one of the two conditions will be true: either (3a), if
qemu is still busy updating, or (4), if qemu has finished updating.

>> We might want to notify spice-server when the guest flips the
>> QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in the irq mask, so we can
>> route the event accordingly.
>
> That would mean even more complex logic, to delay spice-server from
> sending the monitors command to the guest while more data could be
> incoming (which can be multiple chunks comprising one message, so we
> must wait for it to complete before pushing the VDAgentMonitorsConfig
> message).

I don't understand.  Why is the client capability bit fundamentally
different from the irq mask?  I'd expect a guest driver which supports
it would set the irq mask bit and the capability bit at the same time,
no?

>> Or we just route it unconditionally both ways and let the guest
>> sort it (i.e. make vdagent ignore monitors config when the qxl kms
>> driver is active).
>
> Routing it only one way is simpler in spice code.  In other words, I
> had a buggy version doing that and decided that I should just do it
> one way or the other to not bang my head on it.  But it's also simpler
> to handle - what order are the events going to happen in the guest?
> Also, not only spice-vdagent plus our driver, but with, for instance,
> gnome-settings-daemon listening to the uevent from the kernel it will
> do modesetting by itself, racing with spice-vdagent.

I was thinking about spice-vdagent detecting that the qxl kms driver is
active, then simply ignoring all VDAgentMonitorsConfig messages if this
happens to be the case, so there would be no race.

But if you think it is too complicated, no problem, it was just an
idea.  Notifying spice-server which way to route is fine with me too.

cheers,
  Gerd
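P.S. To make the guest side concrete, here is a rough sketch of the read
loop with the extra step (3a) folded in.  This is illustrative only:
struct qxl_device, qxl_clear_irq_bit(), qxl_test_irq_bit() and
parse_monitors_config() are made-up placeholders rather than an existing
driver API, it is assumed to run from process context (e.g. a workqueue
kicked by the IRQ handler, not the handler itself), and the memory
barriers a real implementation would need are omitted for brevity.

  static void qxl_handle_client_monitors_config(struct qxl_device *qdev)
  {
      for (;;) {
          /* (1) clear the interrupt bit before looking at the data */
          qxl_clear_irq_bit(qdev, QXL_INTERRUPT_CLIENT_MONITORS_CONFIG);

          /* (2) wait (yield, don't spin) until qemu finished writing */
          while (qdev->rom->client_monitors_config_updating)
              yield();

          /* (3) parse the (hopefully stable) data */
          parse_monitors_config(qdev, &qdev->rom->client_monitors_config);

          /* (3a) qemu started another update while we were reading;
           * our copy may be half old / half new, so start over */
          if (qdev->rom->client_monitors_config_updating)
              continue;

          /* (4) qemu finished a complete update while we were reading;
           * the interrupt bit got raised again, so start over */
          if (qxl_test_irq_bit(qdev, QXL_INTERRUPT_CLIENT_MONITORS_CONFIG))
              continue;

          /* neither case hit: the parsed config is consistent */
          break;
      }
  }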