Juan Quintela <quint...@redhat.com> writes:

> Fabiano Rosas <faro...@suse.de> wrote:
>> Juan Quintela <quint...@redhat.com> writes:
>>
>>> Fabiano Rosas <faro...@suse.de> wrote:
>>>> The channels_ready semaphore is a global variable not linked to any
>>>> single multifd channel. Waiting on it only means that "some" channel
>>>> has become ready to send data. Since we need to address the channels
>>>> by index (multifd_send_state->params[i]), that information adds
>>>> nothing of value.
>>>
>>> NAK.
>>>
>>> I disagree here O:-)
>>>
>>> the reason why that semaphore exists is multifd_send_pages()
>>>
>>> Simplifying, what the function does is:
>>>
>>> sem_wait(channels_ready);
>>>
>>> for_each_channel()
>>>    look if it is empty()
>>>
>>> But with the semaphore, we guarantee that when we get to the loop there
>>> is a channel ready, so we know we don't busy-wait searching for a
>>> channel that is free.
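
To check that I follow the pattern, here is how I would model it with
plain pthreads (a standalone toy with made-up names, not the real
multifd code; pending_job reduced to a bool and the IO replaced by a
sleep):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N_CHANNELS 4

    typedef struct {
        pthread_mutex_t mutex;
        sem_t sem;          /* posted by the producer: "you have work" */
        bool pending_job;   /* protected by mutex */
        int payload;        /* stand-in for the pages/packet */
    } Channel;

    static Channel channels[N_CHANNELS];
    static sem_t channels_ready;   /* posted by channels: "someone is idle" */

    /* producer side, i.e. the multifd_send_pages() role */
    static int send_one(int payload)
    {
        static int next_channel;

        sem_wait(&channels_ready);    /* at least one channel is free */
        for (int i = next_channel; ; i = (i + 1) % N_CHANNELS) {
            Channel *p = &channels[i];

            pthread_mutex_lock(&p->mutex);
            if (!p->pending_job) {    /* found the free one */
                p->pending_job = true;
                p->payload = payload;
                next_channel = (i + 1) % N_CHANNELS;
                pthread_mutex_unlock(&p->mutex);
                sem_post(&p->sem);    /* wake that channel */
                return i;
            }
            pthread_mutex_unlock(&p->mutex);
        }
    }

    /* channel side, i.e. the multifd_send_thread() role */
    static void *channel_thread(void *opaque)
    {
        Channel *p = opaque;

        for (;;) {
            sem_post(&channels_ready);   /* advertise: this channel is idle */
            sem_wait(&p->sem);           /* wait for work */

            pthread_mutex_lock(&p->mutex);
            int payload = p->payload;    /* touch shared state under the lock */
            pthread_mutex_unlock(&p->mutex);

            usleep(2000);                /* the "IO", done without the lock */
            (void)payload;               /* a real channel would send this */

            pthread_mutex_lock(&p->mutex);
            p->pending_job = false;      /* job done, channel reusable */
            pthread_mutex_unlock(&p->mutex);
        }
        return NULL;
    }

    int main(void)    /* toy driver, no cleanup */
    {
        sem_init(&channels_ready, 0, 0);
        for (int i = 0; i < N_CHANNELS; i++) {
            pthread_mutex_init(&channels[i].mutex, NULL);
            sem_init(&channels[i].sem, 0, 0);
            pthread_t t;
            pthread_create(&t, NULL, channel_thread, &channels[i]);
        }
        for (int job = 0; job < 20; job++) {
            printf("job %d -> channel %d\n", job, send_one(job));
        }
        return 0;
    }

The point being that channels_ready counts idle channels, so the scan in
send_one() never spins without finding one.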
>>>
>>
>> Ok, so that clarifies the channels_ready usage.
>>
>> Now, thinking out loud... can't we simply (famous last words) remove the
>> "if (!p->pending_job)" line and let multifd_send_pages() prepare another
>> payload for the channel? That way multifd_send_pages() could already
>> return and the channel would see one more pending_job and proceed to
>> send it.
>
> No.
>
> See the while loop in multifd_send_thread()
>
>     while (true) {
>         qemu_mutex_lock(&p->mutex);
>
>         if (p->pending_job) {
>
>             ......
>             Do things with parts of the struct that are shared with the
>             migration thread
>             ....
>             qemu_mutex_unlock(&p->mutex);
>
>             // Drop the lock.
>             // Do the remaining work on the channel (the actual send);
>             // pending_job set means the channel is busy.
>             // With the mutex dropped, the migration thread can use the
>             // shared variables, but not the channel.
>
>             // Now we decrease pending_job, so the main thread can
>             // change things as it wants.
>             // But we need to take the lock first.
>             qemu_mutex_lock(&p->mutex);
>             p->pending_job--;
>             qemu_mutex_unlock(&p->mutex);
>             ......
>         }
>     }
>
> This is a common concurrency pattern.  To avoid holding the mutex for
> too long, you use a variable (that can only be tested/changed while
> holding the lock) to mark that the "channel" is busy even though the
> struct the lock protects is not (see how we make sure that the channel
> doesn't use any variable of the struct without taking the lock).

Sure, but what is the purpose of marking the channel as busy? The
migration thread cannot access p->packet anyway. From the perspective of
multifd_send_pages(), as soon as the channel releases the lock to start
the IO, the packet has been sent. It could start preparing the next
pages struct while the channel is doing IO. No?

We don't touch p after the IO aside from p->pending_job--, and we
already distribute the load uniformly by incrementing next_channel.

I'm not saying this would be more performant, just wondering if it would
be possible.
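
In terms of the pthread toy above, I mean something like this on the
channel side (again only a sketch, and it assumes the channel has
already copied everything it needs out of the shared struct before the
IO starts):

    /* variant of channel_thread(): mark the channel free as soon as the
     * job has been copied out, so the producer can queue the next one
     * while the "IO" is still in flight */
    static void *channel_thread_early_release(void *opaque)
    {
        Channel *p = opaque;

        sem_post(&channels_ready);       /* idle at startup */
        for (;;) {
            sem_wait(&p->sem);           /* wait for work */

            pthread_mutex_lock(&p->mutex);
            int payload = p->payload;    /* grab the job...              */
            p->pending_job = false;      /* ...and free the slot already */
            pthread_mutex_unlock(&p->mutex);
            sem_post(&channels_ready);   /* the producer may refill us now */

            usleep(2000);                /* the "IO" overlaps the refill */
            (void)payload;               /* a real channel would send this */
        }
        return NULL;
    }

Whether that assumption holds for the real p->pages/p->packet while the
send is in flight is, I guess, the crux.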

>
>
>> Or, since there's no resending anyway, we could dec pending_jobs earlier
>> before unlocking the channel. It seems the channel could be made ready
>> for another job as soon as the packet is built and the lock is released.
>
> pending_job can be turned into a bool.  We just need to make sure that
> we don't screw it up in _sync().
>
>> That way we could remove the semaphore and let the mutex do the job of
>> waiting for the channel to become ready.
>
> As said, we don't want that, because channels can go at different
> speeds due to factors outside of our control.
>
> If the semaphore bothers you, you can change it to a condition
> variable, but you just move the complexity from one side to the other.
> (The initial implementation had a condition variable, but Paolo said
> that the semaphore is more efficient, so he won.)

Oh, it doesn't bother me. I'm just trying to understand its effects
unequivocally, and to remove it only if it logically follows that it's
not necessary.
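
For reference (mostly to convince myself), I think the condition-variable
variant you mention would look roughly like this in pthread terms: an
idle-channel counter protected by its own lock, signalled by the channels
and waited on by the producer (sketch only, names made up):

    static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready_cond = PTHREAD_COND_INITIALIZER;
    static int idle_channels;            /* protected by ready_lock */

    /* channel side: instead of sem_post(&channels_ready) */
    static void mark_channel_idle(void)
    {
        pthread_mutex_lock(&ready_lock);
        idle_channels++;
        pthread_cond_signal(&ready_cond);
        pthread_mutex_unlock(&ready_lock);
    }

    /* producer side: instead of sem_wait(&channels_ready) */
    static void wait_for_idle_channel(void)
    {
        pthread_mutex_lock(&ready_lock);
        while (idle_channels == 0) {
            pthread_cond_wait(&ready_cond, &ready_lock);
        }
        idle_channels--;
        pthread_mutex_unlock(&ready_lock);
    }

So it just trades the semaphore for an extra lock/unlock on every
dispatch, which I can see is why the semaphore won.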

