On Wed, Sep 18, 2019 at 05:07:23PM +0200, Lentes, Bernd wrote:
Hi,
Hello,
I have a two node HA-cluster with pacemaker, corosync, libvirt and KVM.
Recently I configured a new VirtualDomain which runs fine, but live migration
does not succeed.
This is the error:
VirtualDomain(vm_snipanalysis)
Hi,
I have a two node HA-cluster with pacemaker, corosync, libvirt and KVM.
Recently I configured a new VirtualDomain which runs fine, but live migration
does not succeed.
This is the error:
VirtualDomain(vm_snipanalysis)[14322]: 2019/09/18_16:56:54 ERROR:
snipanalysis: live migration to ha-idg
On Fri, Oct 12, 2018 at 4:50 AM Martin Kletzander
wrote:
> On Mon, Sep 17, 2018 at 02:17:39PM +0200, Fabian Deutsch wrote:
> >On Fri, Sep 14, 2018 at 6:55 PM David Vossel wrote:
> >> Any chance we can get the safety check removed for the next Libvirt
> >> release? Does there need to be an issue
On Mon, Sep 17, 2018 at 02:17:39PM +0200, Fabian Deutsch wrote:
On Fri, Sep 14, 2018 at 6:55 PM David Vossel wrote:
Any chance we can get the safety check removed for the next Libvirt
release? Does there need to be an issue opened to track this?
Regardless of Martin's answer :): Please file
Hello!
We tested this and the problem is not in libvirt, but in pacemaker's
VirtualDomain.
Thank you!
13.09.2018 10:35, Dmitry Melekhov wrote:
Hello!
After some mistakes yesterday we (my colleague and I) think that it
would be wise for libvirt to check config file existence on the remote side
On Fri, Sep 14, 2018 at 6:55 PM David Vossel wrote:
>
>
> On Wed, Sep 12, 2018 at 6:59 AM, Martin Kletzander
> wrote:
>
>> On Mon, Sep 10, 2018 at 02:38:48PM -0400, David Vossel wrote:
>>
>>> On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé
>>> wrote:
>>>
>>> On Tue, Aug 28, 2018 at 05:0
On Wed, Sep 12, 2018 at 6:59 AM, Martin Kletzander
wrote:
> On Mon, Sep 10, 2018 at 02:38:48PM -0400, David Vossel wrote:
>
>> On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé
>> wrote:
>>
>> On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
>>> > Hey,
>>> >
>>> > Over in KubeVirt
14.09.2018 16:15, Jiri Denemark wrote:
On Fri, Sep 14, 2018 at 16:00:43 +0400, Dmitry Melekhov wrote:
14.09.2018 15:43, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote:
13.09.2018 18:57, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Mel
On Fri, Sep 14, 2018 at 16:00:43 +0400, Dmitry Melekhov wrote:
> 14.09.2018 15:43, Jiri Denemark wrote:
> > On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote:
> >>
> >> 13.09.2018 18:57, Jiri Denemark wrote:
> >>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
> 13.
14.09.2018 15:43, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote:
13.09.2018 18:57, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
13.09.2018 17:47, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Me
On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote:
>
>
> 13.09.2018 18:57, Jiri Denemark пишет:
> > On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
> >>
> >> 13.09.2018 17:47, Jiri Denemark wrote:
> >>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
> >>
13.09.2018 18:57, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
13.09.2018 17:47, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
After some mistakes yesterday we (my colleague and I) think that it
would be wise for
On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote:
>
>
> 13.09.2018 17:47, Jiri Denemark wrote:
> > On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
> >> After some mistakes yesterday we (my colleague and I) think that it
> >> would be wise for libvirt to check config
13.09.2018 17:47, Jiri Denemark wrote:
On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
After some mistakes yesterday we (my colleague and I) think that it
would be wise for libvirt to check config file existence on the remote side
Which config file?
The VM config file, namely the qemu one.
On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
> After some mistakes yesterday we (my colleague and I) think that it
> would be wise for libvirt to check config file existence on the remote side
Which config file?
> and throw an error if not,
>
> before migrating, otherwise migrat
Hello!
After some mistakes yesterday we (my colleague and I) think that it
would be wise for libvirt to check config file existence on the remote side
and throw an error if not,
before migrating, otherwise the migration will fail and the VM fs can be
damaged, because it is effectively like pulling the power plug
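As a rough sketch of the kind of pre-flight check being asked for here (the host name and config path below are made up, not taken from the thread), one can test for the file on the destination before asking libvirt to migrate:

  # hypothetical names; adapt to the real destination host and config path
  CONFIG=/etc/libvirt-vm-configs/guest.xml
  DEST=dest-node
  if ! ssh "$DEST" test -f "$CONFIG"; then
      echo "ERROR: $CONFIG missing on $DEST, refusing to migrate" >&2
      exit 1
  fi
  virsh migrate --live guest "qemu+ssh://$DEST/system"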
On Mon, Sep 10, 2018 at 02:38:48PM -0400, David Vossel wrote:
On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé
wrote:
On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
> Hey,
>
> Over in KubeVirt we're investigating a use case where we'd like to
perform
> a live migration within
On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé
wrote:
> On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
> > Hey,
> >
> > Over in KubeVirt we're investigating a use case where we'd like to
> perform
> > a live migration within a network namespace that does not provide
> libvirtd
Hey,
Over in KubeVirt we're investigating a use case where we'd like to perform
a live migration within a network namespace that does not provide libvirtd
with network access. In this scenario we would like to perform a live
migration by proxying the migration through a unix socket to a process in
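As a rough illustration of that kind of proxying (not necessarily what KubeVirt ended up doing; the socket path and port are invented for the example), the migration stream can be pointed at a loopback port that an external helper such as socat forwards into a unix socket:

  # a helper with network access relays the stream; names are examples only
  socat TCP-LISTEN:49152,bind=127.0.0.1,fork,reuseaddr \
        UNIX-CONNECT:/var/run/migration-proxy.sock &

  # make libvirt send the migration data to the local relay instead of
  # opening its own connection to the destination
  virsh migrate --live guest qemu+ssh://dest/system \
        --migrateuri tcp://127.0.0.1:49152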
On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
> Hey,
>
> Over in KubeVirt we're investigating a use case where we'd like to perform
> a live migration within a network namespace that does not provide libvirtd
> with network access. In this scenario we would like to perform a live
>
Hi,
I would suspect that you don't run the same version on the hosts and
you hit an incompatibility issue.
Another question: does the SSH login work without a passphrase?
Marc-Aurèle
On Mon, 2017-11-06 at 08:43 -0500, Hans Knecht wrote:
> Sorry, ctrl entered.
>
> I’m hoping someone here can hel
Sorry, ctrl entered.
I'm hoping someone here can help me figure out what's going on here.
I've got two CentOS 7 hosts with shared storage that I'm trying to do live
migrations between and I'm running into an error with VMs that were
originally created on a CentOS 6 host and then moved the C
I'm hoping someone here can help me figure out what's going on here.
I've got two CentOS 7 hosts with shared storage that I'm trying to do live
migrations between and I'm running into an error with VMs that were
originally created on a CentOS 6 host and then moved to the CentOS 7 hosts.
The CentOS
I'm interested in the modifications for live migration with all storage on
KVM.
I mean to ask: which part of KVM is the modification performed on?
Do storage blocks migrate sequentially? I need information about this.
On Fri, Apr 28, 2017 at 12:06:11PM +0200, Daniel Kučera wrote:
Hi Martin,
in the meantime, I've found a solution which I consider at least acceptable:
1. create zfs snapshot of domain disk (/dev/zstore/test-volume)
2. save original XML domain definition
3. create snapshot in libvirt like this
Hi Martin,
in the meantime, I've found a solution which I consider at least acceptable:
1. create zfs snapshot of domain disk (/dev/zstore/test-volume)
2. save original XML domain definition
3. create snapshot in libvirt like this:
virsh snapshot-create --xmlfile snap.xml --disk-only --no-metad
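For readers who haven't used disk-only snapshots: the snap.xml passed above is a <domainsnapshot> document; a minimal illustration (the disk target and overlay path are examples, not the poster's actual file) looks roughly like:

  <domainsnapshot>
    <name>pre-send</name>
    <disks>
      <disk name='vda' snapshot='external'>
        <source file='/var/lib/libvirt/images/test-overlay.qcow2'/>
      </disk>
    </disks>
  </domainsnapshot>

The same thing can usually be expressed without a separate XML file via virsh snapshot-create-as with a --diskspec argument.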
On Tue, Apr 04, 2017 at 12:04:42PM +0200, Daniel Kučera wrote:
Hi all,
Hi,
I caught your mail in my Spam folder for some reason, maybe the same
happened for others. I don't have that deep knowledge of the snapshots,
but I'm replying so that if someone else has it in Spam and they have
more i
Hi all,
I'm using ZFS on Linux block volumes as my VM storage and want to do live
migrations between hypervisors.
If I create a ZFS snapshot of the used volume on the source host, send it to the
destination host (zfs send/recv) and then run live migration with the
VIR_MIGRATE_NON_SHARED_DISK
flag, the migration wo
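For what it's worth, when the destination volume has been pre-seeded with zfs send/recv like this, the incremental flavour of storage migration is normally the one to ask for; a hedged sketch with example names:

  # take and transfer a snapshot of the volume ahead of time
  zfs snapshot zstore/test-volume@premigrate
  zfs send zstore/test-volume@premigrate | ssh dest-host zfs recv zstore/test-volume

  # --copy-storage-inc corresponds to VIR_MIGRATE_NON_SHARED_INC (copy only
  # what differs), while --copy-storage-all / VIR_MIGRATE_NON_SHARED_DISK
  # copies the whole disk again
  virsh migrate --live --copy-storage-inc guest qemu+ssh://dest-host/system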
On 04/03/2017 10:07 AM, Michael Hierweck wrote:
Hi all,
virsh checks whether a (live) migration is safe or unsafe. When a
migration is considered to be unsafe it is rejected unless the --unsafe
option is provided.
As a part of those checks virsh considers the cache settings for the
underlying
Hi all,
virsh checks whether a (live) migration is safe or unsafe. When a
migration is considered to be unsafe it is rejected unless the --unsafe
option is provided.
As a part of those checks virsh considers the cache settings for the
underlying storage resources. In this context only cache="no
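For context, the check keys off the cache attribute of each disk's <driver> element in the domain XML; a disk stanza that passes it, and the explicit override, look roughly like this (paths are examples):

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/guest.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>

  # or accept the risk and skip the check entirely
  virsh migrate --live --unsafe guest qemu+ssh://dest/system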
I forgot to say: I'm using libvirt 1.2.15
> On 03 Mar 2016, at 11:06, Marc-Aurèle Brothier - Exoscale
> wrote:
>
> Hi!
>
> I'm testing the live migration on libvirt + KVM, the VMs are using non-shared
> local storage only. If I run a live migration with --copy-storage-full, the
> final dis
Hi!
I'm testing the live migration on libvirt + KVM, the VMs are using non-shared
local storage only. If I run a live migration with --copy-storage-full, the
final disk file on the remote host after the migration has the full-blown size of
the specified value (10G in my case), instead of the few
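For reference, with libvirt of that era the destination image for --copy-storage-full generally had to exist before the migration started, and comparing virtual size against actual allocation afterwards shows whether the file really ballooned; path and size are examples:

  # on the destination, pre-create an image of the same format and virtual size
  qemu-img create -f qcow2 /var/lib/libvirt/images/guest.qcow2 10G

  # after the migration, compare virtual size vs. bytes actually allocated
  qemu-img info /var/lib/libvirt/images/guest.qcow2
  du -h /var/lib/libvirt/images/guest.qcow2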
Hi,
I've done a git bisect between v1.2.16 and v1.2.17 to try to debug this
problem myself, and I found that the "problem" comes from the following
commit:
93a19e283e5a147d147d843838be63b6a68d qemu: migration: selective
block device migration
So it seems that I am missing something :-).
This commi
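That commit message refers to the feature that lets the caller name which disks to copy; in virsh it shows up as a --migrate-disks option, so one thing worth trying while narrowing this down is to list the disk targets explicitly (assuming the guest disk target is vda):

  virsh migrate --live --p2p --persistent --undefinesource \
        --copy-storage-all --migrate-disks vda \
        d2b545d3-db32-48d3-b7fa-f62ff3a7fa18 qemu+tcp://dest/system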
Hi,
It seems that live migration using storage copy has been broken since libvirt
1.2.17.
Here is the command line used to do the migration using virsh:
virsh migrate --live --p2p --persistent --undefinesource --copy-storage-all
d2b545d3-db32-48d3-b7fa-f62ff3a7fa18 qemu+tcp://dest/system
XML dump
...@redhat.com
[mailto:libvirt-users-boun...@redhat.com] On Behalf Of rock...@zhaoxin.com
Sent: Tuesday, May 12, 2015 12:20 PM
To: libvirt-users@redhat.com
Subject: [libvirt-users] Live Migration failure: An error occurred, but the
cause is unknown
Hi everyone,
I’m testing the new Openstack
Hi everyone,
I'm testing the new OpenStack Kilo on Ubuntu 15.04 and the hypervisor is KVM.
I can create an instance successfully, but live migration always fails. Error
report like this (from nova-compute.log on a compute node):
2015-05-12 18:11:12.753 3641 INFO nova.virt.libvirt.driver [-] [instan
Michal Privoznik wrote:
> On 07.05.2015 11:26, rock...@zhaoxin.com wrote:
>
>> Hi everyone,
>>
>> I'm testing the new OpenStack Kilo on Ubuntu 15.04 and the hypervisor is Xen 4.5.
>> I can create an instance successfully, but live migration always fails.
>> Error report like this:
>>
>> 2015-05-07 1
On 07.05.2015 11:26, rock...@zhaoxin.com wrote:
> Hi everyone,
>
> I'm testing the new OpenStack Kilo on Ubuntu 15.04 and the hypervisor is Xen 4.5.
> I can create an instance successfully, but live migration always fails.
> Error report like this:
>
> 2015-05-07 10:47:22.135 1331 ERROR nova.virt.lib
Hi everyone,
I'm testing the new OpenStack Kilo on Ubuntu 15.04 and the hypervisor is Xen 4.5.
I can create an instance successfully, but live migration always fails. Error
report like this:
2015-05-07 10:47:22.135 1331 ERROR nova.virt.libvirt.driver [-] [instance:
b1081b86-fdce-4fcc-82c4-51896de441
On Wed, Mar 04, 2015 at 10:54:21 +0100, Martin Klepáč wrote:
> Hi,
>
> I am using a qemu hook script to track the status of VM live migration - libvirt
> 1.2.9 as included in Debian Wheezy backports.
>
> I would like to be informed of the fact that VM migration has successfully
> finished as soon as po
Hi,
I am using a qemu hook script to track the status of VM live migration - libvirt
1.2.9 as included in Debian Wheezy backports.
I would like to be informed of the fact that VM migration has successfully
finished as soon as possible. Based on the steps below, when would that be
possible?
Sample VM m
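For readers unfamiliar with the hook mechanism: libvirtd runs /etc/libvirt/hooks/qemu with the guest name, an operation and a sub-operation as arguments, so a minimal hook that simply records what fires on each host during a migration could look like this (the log path is arbitrary):

  #!/bin/sh
  # /etc/libvirt/hooks/qemu -- log every invocation with a timestamp
  GUEST="$1"; OP="$2"; SUBOP="$3"; EXTRA="$4"
  echo "$(date -Is) guest=$GUEST op=$OP subop=$SUBOP extra=$EXTRA" \
      >> /var/log/libvirt-qemu-hook.log
  exit 0

Correlating those timestamps on the source and destination hosts is one way to pin down the earliest point at which the migration can be considered finished.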
On Tuesday 24 February 2015 17:47:11 you wrote:
> On Tuesday 24 February 2015 16:58:22 you wrote:
> > On 24.02.2015 16:10, Thomas Stein wrote:
> > > On Tuesday 24 February 2015 14:56:10 Michal Privoznik wrote:
> > >> On 24.02.2015 14:29, Thomas Stein wrote:
> > >>> Hola.
> > >>>
> > >>> Just tried
On Tuesday 24 February 2015 16:58:22 you wrote:
> On 24.02.2015 16:10, Thomas Stein wrote:
> > On Tuesday 24 February 2015 14:56:10 Michal Privoznik wrote:
> >> On 24.02.2015 14:29, Thomas Stein wrote:
> >>> Hola.
> >>>
> >>> Just tried a live migration after fixing the pc-q35-2.1 error. Now i
> >
On 24.02.2015 16:10, Thomas Stein wrote:
> On Tuesday 24 February 2015 14:56:10 Michal Privoznik wrote:
>> On 24.02.2015 14:29, Thomas Stein wrote:
>>> Hola.
>>>
>>> Just tried a live migration after fixing the pc-q35-2.1 error. Now I
>>> have a new problem. It seems during live migration only the ra
On Tuesday 24 February 2015 14:56:10 Michal Privoznik wrote:
> On 24.02.2015 14:29, Thomas Stein wrote:
> > Hola.
> >
> > Just tried a live migration after fixing the pc-q35-2.1 error. Now I
> > have a new problem. It seems during live migration only the RAM gets
> > migrated. I use the following co
On 24.02.2015 14:29, Thomas Stein wrote:
> Hola.
>
> Just tried a live migration after fixing the pc-q35-2.1 error. Now I
> have a new problem. It seems during live migration only the RAM gets
> migrated. I use the following command.
>
> # virsh migrate --live domain qemu+ssh://newhost/system
> --
Hola.
Just tried a live migration after fixing the pc-q35-2.1 error. Now I
have a new problem. It seems that during live migration only the RAM gets
migrated. I use the following command.
# virsh migrate --live domain qemu+ssh://newhost/system
--copy-storage-all --verbose --persistent
It works without
On Thu, Jan 22, 2015 at 22:40:59 +0400, Andrey Korolyov wrote:
> On Thu, Jan 22, 2015 at 9:11 PM, Xu (Simon) Chen wrote:
> > Hey folks,
> >
> > I am running libvirt 1.2.4 and qemu 2.1 on a 3.14.27 kernel. I've found that
> > live migrating a relatively large VM (16 cores and 64G ram) is taking
> >
If the guest is modifying memory faster than your network connection can
sync it, the migration will never finish.
I've worked around this in the past by running 'virsh suspend' on the
source host. This will temporarily stop the guest, and allow the
migration to finish.
On 1/22/2015 1:11 PM
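Spelled out with example commands (the domain name is made up), the pause-and-migrate workaround, plus two gentler knobs that serve the same purpose, look like:

  # stop the guest from dirtying pages, let the migration converge,
  # then resume it on the destination
  virsh suspend bigvm
  virsh migrate --live bigvm qemu+ssh://dest/system
  virsh -c qemu+ssh://dest/system resume bigvm

  # gentler alternatives: allow a longer blackout window for the final phase,
  # or raise the migration bandwidth cap (values are only illustrative)
  virsh migrate-setmaxdowntime bigvm 2000   # milliseconds
  virsh migrate-setspeed bigvm 1000         # MiB/s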
On Thu, Jan 22, 2015 at 9:11 PM, Xu (Simon) Chen wrote:
> Hey folks,
>
> I am running libvirt 1.2.4 and qemu 2.1 on a 3.14.27 kernel. I've found that
> live migrating a relatively large VM (16 cores and 64G ram) is taking
> forever - close to 15 hours now, and still not done...
>
> With "lsof -i",
Hey folks,
I am running libvirt 1.2.4 and qemu 2.1 on a 3.14.27 kernel. I've found
that live migrating a relatively large VM (16 cores and 64G ram) is taking
forever - close to 15 hours now, and still not done...
With "lsof -i", I can see a connection is established from my source
hypervisor to a
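One thing worth watching in a case like this is the migration job statistics; if the remaining memory never shrinks, the guest is dirtying pages faster than the link can move them. Example (the domain name is illustrative):

  # poll the running migration job from the source host
  watch -n 5 virsh domjobinfo bigvm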
On 10/09/2014 03:01 PM, Vasiliy Tolstov wrote:
> I have LVM-based (thin pool) storage on local disks. I need to move
> a vps from one vg on one disk to another. Is it possible to migrate to
> localhost with blockcopy migration to another vg?
>
> I understand that I can move an lv from one vg to ano
I have LVM-based (thin pool) storage on local disks. I need to move
a vps from one vg on one disk to another. Is it possible to migrate to
localhost with blockcopy migration to another vg?
I understand that I can move an lv from one vg to another, but that's not what I need.
--
Vasiliy Tolstov,
e-mail
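If the aim is only to move the disk and not the guest itself, virsh blockcopy can mirror the running guest's disk onto a new LV and pivot to it, with no migration involved; a sketch with example names (note that older libvirt required the domain to be transient for blockcopy):

  # create the target thin LV, then mirror and pivot the live disk onto it
  lvcreate -V 20G --thin -n vm01-new vg2/pool
  virsh blockcopy vm01 vda /dev/vg2/vm01-new --blockdev --wait --verbose --pivot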
On 03/20/2014 01:43 PM, Faizul Bari wrote:
> Thanks Eric.
[please don't top-post on technical lists]
>
> So, I need to look at QEMU. Do you know which files/functions I should look
> at?
Here's a recent thread that is proposing to expose additional migration
statistics:
https://lists.gnu.org/ar
Thanks Eric.
So, I need to look at QEMU. Do you know which files/functions I should look
at?
--
Faiz
On Thu, Mar 20, 2014 at 12:41 PM, Eric Blake wrote:
> On 03/20/2014 10:05 AM, Faizul Bari wrote:
> > Hello,
> >
> > I have been trying to track different phases of a live migration
> proces
On 03/20/2014 10:05 AM, Faizul Bari wrote:
> Hello,
>
> I have been trying to track different phases of a live migration process. I
> am using libvirt with qemu-kvm. I am issuing migration commands using
> virsh.
>
> Now, I want to measure the time spent in each phase of live migration,
> e.g.,
Hello,
I have been trying to track different phases of a live migration process. I
am using libvirt with qemu-kvm. I am issuing migration commands using
virsh.
Now, I want to measure the time spent in each phase of live migration,
e.g., pre-copy and stop-copy. I stumbled upon the file qemu_driv
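Short of instrumenting qemu itself, the closest readily available numbers come from the migration job statistics; a rough way to capture them from the source host (the domain name is an example, and the --completed flag needs a reasonably recent libvirt):

  # sample the job while the pre-copy phase is running
  while virsh domjobinfo guest01 | grep -q 'Unbounded'; do
      date -Is; virsh domjobinfo guest01; sleep 1
  done

  # right after it finishes, the completed-job statistics include total time
  # and the downtime of the final stop-copy blackout
  virsh domjobinfo guest01 --completed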
On 03/04/2014 11:12 AM, Pasquale Dir wrote:
> Hello,
> I'd like to know if this is a hypervisor-related problem or a libvirt one.
>
> I did this experiment: on VM I started watching a video on youtube.
> While video was in progress I started migration.
> Migration did not complete until the video had finished.
Hello,
I'd like to know if this is a hypervisor-related problem or a libvirt one.
I did this experiment: on VM I started watching a video on youtube.
While video was in progress I started migration.
Migration did not complete until the video had finished.
I did another experiment: I installed a
On 01/16/2014 09:23 AM, Joaquim Barrera wrote:
> Hello everyone,
>
> Can somebody point me to where I can find the code where libvirt makes the
> decision to complete a live migration?
Libvirt doesn't make the decision, qemu does. So you'll have to look in
the qemu code. Libvirt just reacts to the ch
Hello everyone,
Can somebody point me to where I can find the code where libvirt makes the
decision to complete a live migration?
I mean, at some point while synchronising the VM state, it has to decide that
the delta left to be migrated is low enough to achieve downtime 0, so
libvirt finishes the migrati
Hello all,
I'm trying to get live migration without block copying working in a custom
solution, but I think I need a better understanding of the migration
process. In my environment, I have two iscsi targets exported which are
then placed into a raid via mdadm on the compute node. I've tried using
On 19.06.2013 11:28, cmcc.dylan wrote:
> I'm very excited to hear that qemu supports live migration
> without shared storage, and I have also found some people describing their
> experiments, for example: (http://blog.allanglesit.com/2011/08/linux-
> kvm-management-live-migration-without-shared-s
I'm very excited to hear that qemu supports live migration without
shared storage, and I have also found some people describing their experiments, for
example:
(http://blog.allanglesit.com/2011/08/linux-kvm-management-live-migration-without-shared-storage/)
# virsh migrate --live vmname qem
On 02/05/2013 08:25 AM, Анатолий Степанов wrote:
> Hello!
>
> Is there some way to migrate a guest OS from a Xen host to a KVM host
> using libvirt?
> (live migration is highly desirable)
Live migration: no.
Offline migration: yes - you may want to check out the virt-v2v project
for this.
h
Hello!
Is there some way to migrate a guest OS from a Xen host to a KVM host
using libvirt?
(live migration is highly desirable)
(Both hosts have the same hardware, Xen running in full-virtualized mode.)
On 02/01/2013 12:08 PM, James wrote:
> Hi,
>
> Could anyone provide (or point me to) information on how KSM merging
> behaves when performing live migrations? Does libvirt perform any
> special work to trigger KSM scanning of the migrated pages?
That's probably a question better asked to the qemu
Hi,
Could anyone provide (or point me to) information on how KSM merging
behaves when performing live migrations? Does libvirt perform any
special work to trigger KSM scanning of the migrated pages?
Any guidance is appreciated.
James
Thanks for the comments. Forgot to mention it but yes, I do have
"cache=none" configured for the migrated domain, as
in my XML file, and double-checked before and after the migration. In
fact, if cache is not set to "none", virsh will issue a warning and
refuse to do the live migration.
B
On Sun, Nov 25, 2012 at 06:57:19PM +0800, Xinglong Wu wrote:
> Is there anybody who has had a similar experience with live migration on
> non-shared storage? It apparently leads to failed migrations in
> libvirt but no critical errors are ever reported.
Make sure you have your driver cache set to "none"
Hi,
We have the following environment for live migration with
non-shared storage between two nodes,
Host OS: RHEL 6.3
Kernel: 2.6.32-279.el6.x86_64
Qemu-kvm: 1.2.0
libvirt: 0.10.1
and use "virsh" to do the job as
virsh -c 'qemu:///system' migrate --live --persisten
Hi all,
we've been using libvirt/KVM ever since it was included in Ubuntu 10.04
LTS. At first, it wasn't possible to do live block migrations. We got
used to that.
And now block migration has been possible since Ubuntu 12.04 LTS. And
we're excited.
There's just one hitch:
Many of migratio
On 07/10/2012 06:46 AM, hcyy wrote:
> Hello, everybody. I use NFS to do live migration. After entering virsh
> --connect=qemu:///system --quiet migrate --live vm12 qemu+tcp://pcmk-1/system
> (vm12 is the vm name, pcmk-1 is the host name) it takes almost 10s for preparation.
> During the 10s, the vm is sti
Hello, everybody. I use NFS to do live migration. After entering virsh
--connect=qemu:///system --quiet migrate --live vm12 qemu+tcp://pcmk-1/system
(vm12 is the vm name, pcmk-1 is the host name) it takes almost 10s for
preparation. During the 10s, the vm is still running and can ping other vms. But
if i i
Hi,
I am trying to migrate a running instance, but it fails with the following
error:
$ virsh migrate --live instance-0008 qemu+tcp://10.2.3.150/system --verbose
error: operation failed: migration job: unexpectedly failed
I can see following in the instance specific qemu log directory
(/va
On 10/28/2011 12:54 AM, Luengffy XUE wrote:
As to the actual error, I'm not sure if I can offer advice until I try to
reproduce it, but I do know that --copy-storage-all hasn't received as much
testing. Make sure that an empty file exists on the destination prior to
the migration attempt, in
2011/10/28 Eric Blake
> On 10/27/2011 09:37 PM, Luengffy XUE wrote:
>
>> command:
>>
>> virsh migrate --live --copy-storage-all vm
>> qemu+ssh://destinationHost/system
>>
>> error
>> : unexpected failure
>>
>> PS log file in the jpg
>>
>
> Your jpg takes up a lot of bandwidth; it would be more
On 10/27/2011 09:37 PM, Luengffy XUE wrote:
command:
virsh migrate --live --copy-storage-all vm
qemu+ssh://destinationHost/system
error
: unexpected failure
PS log file in the jpg
Your jpg takes up a lot of bandwidth; it would be more efficient if you
copy and paste the text from your cons
I'd like to begin using live migrations with my KVM install, but I do not
have any shared storage. I will have 2 identical nodes, with their own
local storage. Everything I've read seems to point to DRBD, but I am not
finding many resources on how to implement DRBD + qcow2 + KVM.
Currently I've
On 2011-10-27 11:16, Luengffy XUE wrote:
> Oh, I think I can describe the problem more specifically.
>
> 1 Create the vm in the source host and it runs well.
> 2 Create the image in the destination and change the user and group
Does 2) mean that you created an image with the same name on the dest yourself?
> 3 Open TCP socket in the destination host.
How did you
Oh, I think I can describe the problem more specifically.
1 Create the vm in the source host and it runs well.
2 Create the image in the destination and change the user and group
3 Open TCP socket in the destination host.
In the source host I use the different commands as follows:
1 $ virsh migrate
On 2011-10-26 22:22, Luengffy XUE wrote:
Hi, all
I migrate a vm from source host to the destination host.
I am sure every part is ok for migration.
1 Create the image in the destination and change the user and group.
2 Open TCP socket in the destination host.
and I migrate in the source host co
I find this error is caused by this source file, savevm.c:
http://tomoyo.sourceforge.jp/cgi-bin/lxr/source/savevm.c?v=qemu-kvm-0.12.3
I do not understand it. Is there anyone who develops it?
2011/10/26 vmnode guy
> Try this, I am not sure it will work..
> $ virsh -c qemu+ssh://destinationHost/sy
Try this, I am not sure it will work..
$ virsh -c qemu+ssh://destinationHost/system migrate --live
--copy-storage-all vmubuntu
But the below option works for me:-
$ virsh -c qemu+ssh://destinationHost/system migrate list
Regards,
Peter
On Wed, Oct 26, 2011 at 10:22 PM, Luengffy XUE wrote:
Hi, all
I am migrating a vm from the source host to the destination host.
I am sure every part is ok for migration.
1 Create the image in the destination and change the user and group.
2 Open TCP socket in the destination host.
and I migrate in the source host command line.
$ virsh migrate --li
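For step 2 ("open TCP socket"), what usually matters is that libvirtd on the destination is actually listening on TCP; with libvirt of that era that meant roughly the following, noting that auth_tcp="none" is for testing only and that the guest name here is just an example:

  # /etc/libvirt/libvirtd.conf on the destination
  #   listen_tls = 0
  #   listen_tcp = 1
  #   auth_tcp = "none"
  # and libvirtd must be started with --listen (e.g. LIBVIRTD_ARGS="--listen")

  # then from the source both the control connection and the migration go over tcp
  virsh -c qemu+tcp://destinationHost/system list
  virsh migrate --live guest qemu+tcp://destinationHost/system --copy-storage-all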
On 10/10/2011 04:28 AM, PREETHI RAMESH wrote:
Hi,
This is a follow-up to the mail regarding migration I'd posted a few days ago. My
scenario:
I'm creating two virtual machine images from an Ubuntu 10.10 iso, with RAM
sizes of 128 and 700 MB.
Using the migrate()
Which language binding? Show the actual c
Hi,
This is a follow-up to the mail regarding migration I'd posted a few days ago. My
scenario:
I'm creating two virtual machine images from an Ubuntu 10.10 iso, with RAM
sizes of 128 and 700 MB.
Using migrate(), I migrate it from the current physical machine to the
destination machine via a 100 Mbps
On 05/23/2011 11:51 AM, prakash hr wrote:
> Hi all,
> For my academic project I am analysing the performance of transport protocols in
> live migration of virtual machines. I have configured live migration
> using TCP and I succeeded with the live migration.
> Now for my further remaining project work I ne
Hi all,
For my academic project I am analysing the performance of transport protocols in
live migration of virtual machines. I have configured live migration
using TCP and I succeeded with the live migration.
Now for my further remaining project work I need to do the same thing for
a UDP implementation. Wh
Sir,
I am working in a cloud environment and I have set up a private cloud using
Eucalyptus. I am trying to migrate virtual machines within the cloud using
the virsh migrate command. I could implement it successfully, but I wanted
to know the method used to do live migration. I tried to interpret the
I have two servers running Ubuntu 9.10, with shared disk from ISCSI SAN
and OCFS2, identical network configurations, shared ssh keys, and pki
keys. My vm will boot from either machine and run correctly. When I
attempt to do a live migration with "migrate --live base32
qemu+[ssh/tcp]://vm1/system