Subject: Re: Virtual machines volume lock manager
We had a similar discussion before, actually.
See PR https://github.com/apache/cloudstack/pull/2722 and PR
https://github.com/apache/cloudstack/pull/2984
We made similar changes as described in PR 2722. It caused duplicated VMs.
The change in PR 2984 (same behavior in old CloudStack versions) is
True, true... I forgot these cases while I was running KVM.
Check if that VM is using a compute offering which is marked as "HA
enabled" - and if YES, then Wei is 100% right (you can confirm this from
the logs - checking for info on starting that VM on a specific hypervisor, etc.)
Though, IF doing live migr
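A quick way to check both of those - a sketch, assuming cloudmonkey is
configured against your management server and the default log location;
the IDs and the instance name are placeholders for your own values:

    # find the VM's service offering, then check its HA flag
    list virtualmachines id=<vm-id> filter=name,serviceofferingid
    list serviceofferings id=<offering-id> filter=name,offerha

    # look for the HA manager (re)starting the VM in the mgmt server log
    grep -i "<vm-instance-name>" \
        /var/log/cloudstack/management/management-server.log | grep -i "HA"

If "offerha" comes back true and the log shows the VM being started on a
host while the migration was still failing, that points at HA rather than
the migration itself.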
Hi Rakesh,
The duplicated VM is not caused by migration, but by HA.
-Wei
On Wed, 30 Oct 2019 at 11:31, Rakesh Venkatesh wrote:
> Hi Andrija
>
> Sorry for the late reply.
>
> I'm using ACS version 4.7. Qemu version 1:2.5+dfsg-5ubuntu10.40
>
> I'm not sure if the ACS job or the libvirt job failed, as I didn't look
> into the logs.
I would advise trying to reproduce.
Start a migration, then either:
- configure the timeout so that it's way too low, so that the migration
fails due to timeouts (a sketch follows below), or
- restart the mgmt server in the middle of the migration.
This should cause the migration to fail - and you can observe whether you
have reproduced the problem.
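For the timeout route, a minimal sketch using cloudmonkey - assuming your
version still exposes the "migratewait" global setting (verify the exact
setting name on your install):

    # lower the migration timeout to something unrealistically small
    update configuration name=migratewait value=10

    # restart the mgmt server so the new value takes effect,
    # if your version does not apply it dynamically
    service cloudstack-management restart

Then kick off a live migration of a VM with a large enough memory
footprint that it cannot possibly finish within 10 seconds.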
Hi Andrija
Sorry for the late reply.
I'm using ACS version 4.7. Qemu version 1:2.5+dfsg-5ubuntu10.40
I'm not sure if the ACS job or the libvirt job failed, as I didn't look
into the logs.
Yes, the VM will be in a paused state during the migration, but after the
failed migration, the same VM was in the "running" state on both hypervisors.
I've been running a KVM public cloud up until recently and have never seen
such behaviour.
What versions (ACS, qemu, libvirt) are you running?
How does the migration fail - the ACS job, or the libvirt job?
The destination VM is by default always in the PAUSED state until the
migration is finished - only then is the destination VM resumed.
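You can watch this directly on the hypervisors with plain virsh (nothing
ACS-specific; <vm-instance-name> is the internal i-x-y-VM style name):

    # on the destination host, while the migration is in flight
    virsh list --all    # the incoming copy should show as "paused"

    # state of one specific domain
    virsh domstate <vm-instance-name>

Only after a successful migration should the destination copy become
"running" and the source copy go away.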
Hello Users,
Recently we have seen cases where, when a VM migration fails, CloudStack
ends up running two instances of the same VM on different hypervisors. The
state will be "running" and not any other transition state. This will of
course lead to corruption of the disk. Does CloudStack have any option to
prevent this?
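A quick way to confirm this situation on KVM hosts, using plain virsh
(<vm-instance-name> is the internal i-x-y-VM style name):

    # run on each hypervisor that may be hosting a copy
    virsh list --all | grep <vm-instance-name>

The same instance reported as "running" on two hosts at once is exactly
the dangerous state described above, since both copies can write to the
same volume.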