On Wed, May 16, 2018 at 5:36 PM, 858585 jemmy <jemmy858...@gmail.com> wrote:
> On Tue, May 15, 2018 at 10:54 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>> On 05/05/2018 16:35, Lidong Chen wrote:
>>> @@ -2635,12 +2637,20 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
>>>  {
>>>      QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
>>>      QEMUFile *f = rioc->file;
>>> -    RDMAContext *rdma = rioc->rdma;
>>> +    RDMAContext *rdma;
>>>      int ret;
>>>      ssize_t done = 0;
>>>      size_t i;
>>>      size_t len = 0;
>>>
>>> +    rcu_read_lock();
>>> +    rdma = atomic_rcu_read(&rioc->rdmaout);
>>> +
>>> +    if (!rdma) {
>>> +        rcu_read_unlock();
>>> +        return -EIO;
>>> +    }
>>> +
>>>      CHECK_ERROR_STATE();
>>>
>>>      /*
>>
>> I am not sure I understand this.  It would probably be wrong to use the
>> output side from two threads at the same time, so why not use two mutexes?
>
> Two threads will not invoke qio_channel_rdma_writev at the same time.
> On the source qemu, the migration thread only uses writev, and the return
> path thread only uses readv.
> The destination qemu already has a mutex, mis->rp_mutex, to make sure
> writev is not used concurrently.
>
> The rcu_read_lock is used to make sure the RDMAContext is not used while
> another thread closes it.
Any suggestion?

>
>>
>> Also, who is calling qio_channel_rdma_close in such a way that another
>> thread is still using it?  Would it be possible to synchronize with the
>> other thread *before*, for example with qemu_thread_join?
>
> The MigrationState structure includes the to_dst_file and from_dst_file
> QEMUFiles, and the two QEMUFiles use the same QIOChannel.
> For example, if the return path thread calls
> qemu_fclose(ms->rp_state.from_dst_file),
> it will also close the RDMAContext for ms->to_dst_file.
>
> For live migration, the source qemu invokes qemu_fclose from different
> threads, including the main thread, the migration thread, and the return
> path thread.
>
> The destination qemu invokes qemu_fclose from the main thread, the listen
> thread, and the COLO incoming thread.
>
> I have not found an effective way to synchronize these threads.
>
> Thanks.
>
>>
>> Thanks,
>>
>> Paolo