Hi,
I tested the reliability of qemu in the IPSAN environment as follows:
(1) create one VM on an x86 server which is connected to an IPSAN, and
the VM has only one system volume which is on the IPSAN;
(2) disconnect the network between the server and the IPSAN. On the
server, I have a "multipat
On 2014/8/11 22:21, Stefan Hajnoczi wrote:
On Mon, Aug 11, 2014 at 04:33:21PM +0800, Bin Wu wrote:
Hi,
I tested the reliability of qemu in the IPSAN environment as follows:
(1) create one VM on an x86 server which is connected to an IPSAN, and the VM
has only one system volume which is on the
In the statement "vring_avail_event(vq, vring_avail_idx(vq));", I think the
"avail" event idx should be equal to the number of requests that have been
taken (vq->last_avail_idx), not the number of all available requests
(vring_avail_idx(vq)). Is there any special consideration here, or do I just
misunderstand the event idx?
thanks
Bin Wu
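For readers following the event-idx question above, here is a small, self-contained sketch of the notification check defined by the virtio specification. The vring_need_event() formula below is the one from the spec; the ring indices in main() are made-up values, chosen only to show how the avail event published by the device (vring_avail_idx(vq) versus vq->last_avail_idx) changes whether the guest kicks.

#include <stdint.h>
#include <stdio.h>

/* Notification test from the virtio specification: the driver only kicks
 * the device if the index published by the device (event_idx) lies in the
 * half-open window [old_idx, new_idx), in modulo-2^16 arithmetic. */
static int vring_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
{
    return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
}

int main(void)
{
    /* Made-up state: the ring already holds requests 0..9 (avail idx = 10),
     * the device has consumed six of them (last_avail_idx = 6), and the
     * guest now adds one more request, moving the avail idx from 10 to 11. */
    uint16_t old_avail = 10, new_avail = 11;

    /* Device published avail_event = vring_avail_idx(vq) = 10: the new
     * request crosses the event index, so the guest kicks. */
    printf("avail_event=10 -> kick=%d\n", vring_need_event(10, new_avail, old_avail));

    /* Device published avail_event = vq->last_avail_idx = 6: that index was
     * already passed earlier, so this particular addition does not kick. */
    printf("avail_event=6  -> kick=%d\n", vring_need_event(6, new_avail, old_avail));

    return 0;
}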
statement "vring_avail_event(vq, vring_avail_idx(vq));", I think the
"avail" event idx should be equal to the number of requests that have been
taken (vq->last_avail_idx), not the number of all available requests
(vring_avail_idx(vq)). Is there any special consideration here, or do I just
misunderstand the event idx?
thanks
--
Bin Wu
d to
zero). Therefore, we need to use the "type" variable to distinguish this case.
Signed-off-by: Bin Wu
---
hw/scsi/virtio-scsi.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index a1725b8..5742d39 100644
--- a/hw/s
tification.
In virtqueue_pop, when a request is popped, the current avail event
idx should be set to vq->last_avail_idx.
Signed-off-by: Bin Wu
---
hw/virtio/virtio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 2c236
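The hunk itself is cut off above, but based on the commit message and the line quoted earlier in the thread, the change being described is presumably the following one-line substitution inside virtqueue_pop() (a sketch, not the verbatim patch):

/* before: publish the index covering everything currently available */
vring_avail_event(vq, vring_avail_idx(vq));

/* after: publish only the index of the last request actually taken */
vring_avail_event(vq, vq->last_avail_idx);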
On 2014/10/28 13:32, Michael S. Tsirkin wrote:
> On Tue, Oct 28, 2014 at 02:13:02AM +0000, Bin Wu wrote:
>> The event idx in virtio is an effective way to reduce the number of
>> interrupts and exits of the guest. When the guest puts a request
>> into the virtio ring, it doe
tification.
In virtqueue_pop, when a request is popped, the current avail event
idx should be set to vq->last_avail_idx.
Signed-off-by: Bin Wu
---
V1 -> V2:
update the same code in hw/virtio/dataplane/vring.c (Stefan)
---
hw/virtio/dataplane/vring.c | 8
hw/virtio/virtio.c
On 2014/10/31 0:48, Stefan Hajnoczi wrote:
> On Tue, Oct 28, 2014 at 02:13:02AM +0000, Bin Wu wrote:
>> The event idx in virtio is an effective way to reduce the number of
>> interrupts and exits of the guest. When the guest puts a request
>> into the virtio ring, it doesn't
On 2014/9/10 13:59, Fam Zheng wrote:
v5: Fix IDE callback. (Paolo)
Fix blkdebug. (Paolo)
Drop the DMA fix which is independent of this series. (Paolo)
Incorporate Yuan's patch on quorum_aio_cancel. (Benoît)
Commit message wording fix. (Benoît)
Rename qemu_aio_release to q
From: Bin Wu
We tested VM migration with their disk images by drive_mirror. During
the migration, two VMs copied large files between each other. During the
test, a segfault occurred. The stack was as follows:
(gdb) bt
#0 0x7fa5a0c63fc5 in qemu_co_queue_run_restart (co=0x7fa5a1798648) at
qemu
From: Bin Wu
The error scenario is as follows: coroutine C1 enters C2, C2 yields
back to C1, then C1 terminates and the related coroutine memory
becomes invalid. After a while, the C2 coroutine is entered again.
At this point, C1 is used as a parameter passed to
qemu_co_queue_run_restart
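To make the sequence easier to follow, here is a sketch of the reported scenario written against the 2015-era coroutine API (block/coroutine.h, where qemu_coroutine_enter() still takes an opaque argument). It is an illustration of the report, not a standalone test case, and it only builds inside a QEMU tree of that vintage:

#include "block/coroutine.h"

static Coroutine *c2;

/* C2 yields back to whoever entered it (C1) and is re-entered later. */
static void coroutine_fn c2_entry(void *opaque)
{
    /* The suspended frame behind this yield still remembers C1 as the
     * coroutine it switched to; when C2 is resumed below, that stale
     * pointer is what ends up being handed to qemu_co_queue_run_restart(). */
    qemu_coroutine_yield();
}

/* C1 creates and enters C2, then simply returns (terminates). */
static void coroutine_fn c1_entry(void *opaque)
{
    c2 = qemu_coroutine_create(c2_entry);
    qemu_coroutine_enter(c2, NULL);   /* C2 yields straight back to C1 */
    /* C1 terminates here; its Coroutine object is recycled, so any pointer
     * to it still held by C2's suspended frame becomes invalid. */
}

static void demo(void)
{
    Coroutine *c1 = qemu_coroutine_create(c1_entry);
    qemu_coroutine_enter(c1, NULL);   /* run C1 until it terminates */
    qemu_coroutine_enter(c2, NULL);   /* re-entering C2 touches the stale C1 */
}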
sorry, there is a mistake in this patch: the "ret" variable is not
defined :<
I will send a new patch to fix this problem.
On 2015/2/9 12:09, Bin Wu wrote:
> From: Bin Wu
>
> The error scenario is as follows: coroutine C1 enters C2, C2 yields
> back to C1, then C1 t
From: Bin Wu
We tested VM migration with their disk images by drive_mirror. During
the migration, two VMs copied large files between each other. During the
test, a segfault occurred. The stack was as follows:
(gdb) bt
qemu-coroutine-lock.c:66
to=0x7fa5a1798648) at qemu-coroutine.c:97
request
On 2015/2/9 16:12, Fam Zheng wrote:
> On Sat, 02/07 17:51, w00214312 wrote:
>> From: Bin Wu
>>
>> When a coroutine holds a lock, other coroutines that want to get
>> the lock must wait on a co_queue by adding themselves to the
>> CoQueue. However, if a waiti
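For context, this is roughly how the lock path parks a waiter on the CoQueue (a paraphrased sketch of the 2015-era qemu_co_mutex_lock() in qemu-coroutine-lock.c, not the verbatim source):

void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
{
    /* A waiter adds itself to the mutex's CoQueue and yields; it is woken
     * again when the current holder unlocks and restarts the queue. */
    while (mutex->locked) {
        qemu_co_queue_wait(&mutex->queue);
    }

    mutex->locked = true;
}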
On 2015/2/9 17:23, Paolo Bonzini wrote:
>
>
> On 07/02/2015 10:51, w00214312 wrote:
>> From: Bin Wu
>>
>> When we test drive_mirror between different hosts over NBD devices,
>> we find that the qemu process sometimes crashes during the cancel phase.
>&
On 2015/2/9 22:48, Stefan Hajnoczi wrote:
> On Mon, Feb 09, 2015 at 02:50:39PM +0800, Bin Wu wrote:
>> From: Bin Wu
>>
>> We tested VM migration with their disk images by drive_mirror. During
>> the migration, two VMs copied large files between each other. During the
>&
On 2015/2/9 17:09, Paolo Bonzini wrote:
>
>
> On 09/02/2015 07:50, Bin Wu wrote:
>> From: Bin Wu
>>
>> We tested VM migration with their disk images by drive_mirror. During
>> the migration, two VMs copied large files between each other. During the
>> tes
On 2015/2/9 18:12, Kevin Wolf wrote:
> On 09.02.2015 at 10:36, Bin Wu wrote:
>> On 2015/2/9 16:12, Fam Zheng wrote:
>>> On Sat, 02/07 17:51, w00214312 wrote:
>>>> From: Bin Wu
>>>>
>>>> When a coroutine holds a lock, other coroutines
On 2015/2/10 11:16, Wen Congyang wrote:
> On 02/09/2015 10:48 PM, Stefan Hajnoczi wrote:
>> On Mon, Feb 09, 2015 at 02:50:39PM +0800, Bin Wu wrote:
>>> From: Bin Wu
>>>
>>> We tested VM migration with their disk images by drive_mirror. With
>>> migra
From: Bin Wu
We tested VM migration with their disk images by drive_mirror. During
the migration, two VMs copied large files between each other. During the
test, a segfault occurred. The stack was as follows:
00) 0x7fa5a0c63fc5 in qemu_co_queue_run_restart (co=0x7fa5a1798648) at
qemu-coroutine
On 2015/2/9 17:23, Paolo Bonzini wrote:
>
>
> On 07/02/2015 10:51, w00214312 wrote:
>> From: Bin Wu
>>
>> When we test drive_mirror between different hosts over NBD devices,
>> we find that the qemu process sometimes crashes during the cancel phase.
>&
From: Bin Wu
When we tested VM migration between different hosts with NBD
devices, we found that if we sent a cancel command just after the
drive_mirror had started, a coroutine re-enter error would occur.
The stack was as follows:
(gdb) bt
00) 0x7fdfc744d885 in raise () from /lib64/libc.so
On 2015/2/10 18:32, Kevin Wolf wrote:
> On 10.02.2015 at 06:16, Bin Wu wrote:
>> From: Bin Wu
>>
>> We tested VM migration with their disk images by drive_mirror. During
>> the migration, two VMs copied large files between each other. During the
>> test, a s
From: Bin Wu
Signed-off-by: Bin Wu
---
block/mirror.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/mirror.c b/block/mirror.c
index 4056164..08372df 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -530,7 +530,9 @@ static void coroutine_fn mirror_run(void *opaque
On 2015/4/1 16:19, Fam Zheng wrote:
> On Wed, 04/01 12:42, Bin Wu wrote:
>> From: Bin Wu
>
> What's the issue you are fixing? I think the coroutine is already running in
> the AioContext of bs.
>
> Fam
>
In the current implementation of bdrv_drain, it should be
On 2015/4/1 19:59, Stefan Hajnoczi wrote:
> On Wed, Apr 01, 2015 at 04:49:39PM +0800, Bin Wu wrote:
>>
>> On 2015/4/1 16:19, Fam Zheng wrote:
>>> On Wed, 04/01 12:42, Bin Wu wrote:
>>>> From: Bin Wu
>>>
>>> What's the issue are you
14 +348,18 @@ static inline void submit_requests(BlockBackend *blk, MultiReqBuffer *mrb,
         block_acct_merge_done(blk_get_stats(blk),
                               is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ,
                               num_reqs - 1);
+    } else {
+        merged_request = mrb->reqs[start];
+        qiov = &mrb->reqs[start]->qiov;
+        nb_sectors = mrb->reqs[start]->qiov.size / BDRV_SECTOR_SIZE;
     }

     if (is_write) {
         blk_aio_writev(blk, sector_num, qiov, nb_sectors,
-                       virtio_blk_rw_complete, mrb->reqs[start]);
+                       virtio_blk_rw_complete, merged_request);
     } else {
         blk_aio_readv(blk, sector_num, qiov, nb_sectors,
-                      virtio_blk_rw_complete, mrb->reqs[start]);
+                      virtio_blk_rw_complete, merged_request);
     }
 }
--
Bin Wu
Hi,
When an I/O error happens in the physical device, the qemu block layer
supports error reporting, error ignoring and error stopping (for example,
in virtio-blk). Is there any way to resend the failed I/O?
thanks
--
Bin Wu
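The "stop" policy mentioned above is the usual retry path in practice: a request that fails while werror/rerror=stop is in effect is kept by the device model and re-submitted when the guest is resumed. A minimal sketch of that park-and-retry pattern follows; all names in it are hypothetical and it is not the actual virtio-blk code:

/* Hypothetical per-device state modelling the park-and-retry behaviour
 * behind the stop error policy. */
typedef struct PendingReq {
    struct PendingReq *next;
    /* ...request payload (sector, iovec, ...) would live here... */
} PendingReq;

static PendingReq *parked;   /* requests that failed while the VM was stopped */

/* Called when an I/O request fails and the configured policy is "stop". */
static void handle_io_error_stop(PendingReq *req)
{
    req->next = parked;      /* remember the request instead of failing it */
    parked = req;
    /* ...this is also the point where the guest would be paused... */
}

/* Called when the guest is resumed, e.g. after the storage path recovers:
 * every parked request is issued again. */
static void resume_parked_requests(void (*resubmit)(PendingReq *))
{
    while (parked) {
        PendingReq *req = parked;
        parked = req->next;
        resubmit(req);
    }
}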
On 2014/12/25 10:42, Fam Zheng wrote:
> On Thu, 12/25 09:57, Bin Wu wrote:
>> Hi,
>>
>> When an I/O error happens in the physical device, the qemu block layer supports error
>> reporting, error ignoring and error stopping (for example, in virtio-blk). Can we
>> have any way to res
On 2014/12/25 15:19, Fam Zheng wrote:
> On Thu, 12/25 11:46, Bin Wu wrote:
>> On 2014/12/25 10:42, Fam Zheng wrote:
>>> On Thu, 12/25 09:57, Bin Wu wrote:
>>>> Hi,
>>>>
>>>> When an I/O error happens in the physical device, the qemu block layer supports