> >> +void vnc_job_push(VncJob *job)
> >> +{
> >> +    vnc_lock_queue(queue);
> >> +    if (QLIST_EMPTY(&job->rectangles)) {
> >> +        qemu_free(job);
> >
> > No need to lock if you get into the "then" block.
>
> I locked it because the main thread can try to push a job while a
> consumer is removing one, so I can't call QLIST_EMPTY() without
> locking the queue.

You're obviously right.
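(For reference, a minimal sketch of the push path this argues for, with both
the emptiness check and the enqueue kept under the queue lock. Only the quoted
lines above come from the patch; the else branch, vnc_unlock_queue(), the
queue->jobs/queue->cond fields and the condition-variable wakeup are
assumptions about the rest of the series, shown here just to illustrate the
locking rationale.)

    void vnc_job_push(VncJob *job)
    {
        /* The emptiness check and the drop-or-enqueue decision both stay
         * under the queue lock, because a consumer thread may be taking
         * jobs off the queue at the same time. */
        vnc_lock_queue(queue);
        if (QLIST_EMPTY(&job->rectangles)) {
            qemu_free(job);                        /* nothing to encode */
        } else {
            QTAILQ_INSERT_TAIL(&queue->jobs, job, next);
            qemu_cond_broadcast(&queue->cond);     /* wake a consumer */
        }
        vnc_unlock_queue(queue);
    }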
>>> +    qemu_mutex_unlock(&job->vs->output_mutex);
>>> +
>>> +    if (job->vs->csock != -1 && job->vs->abording != true) {
>>> +        vnc_flush(job->vs);
>>> +    }
>>> +
>>
>> You're accessing the abort flag outside the mutex here. Also, you are not
>> using vnc_{,un}lock_output.
>
> I assumed that bool (int) were atomic... but you're right, I should lock
> that.

They are, however:

1) if you access them outside a mutex you have to think about whether you
need memory barriers and whether there are races;

2) since you already own the mutex and you're just keeping it a bit longer,
it costs basically nothing.

BTW, with the same reasoning you could avoid taking the mutex altogether in
vnc_abort_display_jobs (but I think it's better to keep it).

Also, I took a look at the code again and I noticed this:

>>> +        if (job->vs->csock == -1) {
>>> +            goto disconnected;
>>> +        }

The "goto" is jumping over the unlocking of &job->vs->mutex. You only want a
"break;" I think.

>> static void vnc_disconnect_finish(VncState *vs)
>> {
>> +    vnc_jobs_join(vs); /* Wait encoding jobs */
>> +    vnc_lock(vs);
>
> > Possibly racy? Maybe you have to set the aforementioned new flag
> > queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it
> > is set.
>
> vnc_disconnect_finish can only be called by the main thread, I don't see
> how this could be racy, any hint?

I was thinking of someone queuing jobs between the end of vnc_jobs_join and
the time the vnc_lock is taken. But indeed jobs cannot be queued at this
time because vnc_refresh can only be called from the same thread. So this is
correct.

>> Also, if anything waits on the same vs in vnc_refresh while you own it in
>> vnc_disconnect_finish, as soon as you unlock they'll have a dangling
>> pointer. (After you unlock the mutex the OS wakes the thread, but then
>> pthread_mutex_lock has to check again that no one got the lock in the
>> meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you). Probably it's
>> better to use a single lock on vd->clients instead of one lock per
>> VncState.

Same here. No race because everything happens in the main thread.

Paolo
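(To make the goto-vs-break point concrete: a rough sketch of the shape the
worker loop could take with the suggested fix. The function name, the
VncRectEntry type, the "next" list field and the vnc_lock()/vnc_unlock()
pairing around the loop are assumptions for illustration; only the csock
check comes from the quoted hunk. The point is simply that "break;" still
falls through to the unlock, while the "goto" skips it.)

    static void vnc_worker_process_job(VncJob *job)
    {
        VncRectEntry *entry, *tmp;

        vnc_lock(job->vs);
        QLIST_FOREACH_SAFE(entry, &job->rectangles, next, tmp) {
            if (job->vs->csock == -1) {
                /* A "goto disconnected;" here would jump over the
                 * vnc_unlock() below and leave &job->vs->mutex held;
                 * "break;" reaches the unlock on every exit path. */
                break;
            }
            /* ... encode and queue this rectangle ... */
        }
        vnc_unlock(job->vs);
    }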