"Dr. David Alan Gilbert" <dgilb...@redhat.com> wrote:
> * Juan Quintela (quint...@redhat.com) wrote:
>> Either for quit, sync or packet, we first wake them.
>> 
>> Signed-off-by: Juan Quintela <quint...@redhat.com>
>> ---
>>  migration/ram.c | 13 +++++++++++--
>>  1 file changed, 11 insertions(+), 2 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 0f1340b4e3..21b448c4ed 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -754,6 +754,7 @@ static void *multifd_send_thread(void *opaque)
>>      p->num_packets = 1;
>>  
>>      while (true) {
>> +        qemu_sem_wait(&p->sem);
>>          qemu_mutex_lock(&p->mutex);
>>          multifd_send_fill_packet(p);
>>          if (p->quit) {
>> @@ -761,7 +762,9 @@ static void *multifd_send_thread(void *opaque)
>>              break;
>>          }
>>          qemu_mutex_unlock(&p->mutex);
>> -        qemu_sem_wait(&p->sem);
>> +        /* this is impossible */
>> +        error_setg(&local_err, "multifd_send_thread: Unknown command");
>> +        break;
>
> This error disappears in a later patch saying that you can have spurious
> wakeups.
>
>>      }
>>  
>>  out:
>> @@ -905,6 +908,7 @@ static void *multifd_recv_thread(void *opaque)
>>      trace_multifd_recv_thread_start(p->id);
>>  
>>      while (true) {
>> +        qemu_sem_wait(&p->sem);
>>          qemu_mutex_lock(&p->mutex);
>
> All this stuff seems to change again in later patches.

Tricky :-(

Except for patches 18, 19 and 20, everything else is independent and
works at every point in the series.  18 and 19 don't work (for multifd)
on their own.  So I have the choice of:
- creating a bigger patch that is more difficult to understand (in my
  humble opinion)
- or splitting it logically into patches that don't work individually.

The real problem is that before we actually send data through the
channels, the reception side synchronizes on a semaphore.  Once we start
sending data through the channels, reception synchronizes on a read().
"Faking" synchronization on read() while you are not reading is
"interesting".

Later, Juan.
