On Fri, Dec 9, 2016 at 1:17 AM, Alexandre DERUMIER wrote:
>
> - implement a gui/tool to manage multiple clusters
>
That would be awesome!
I forgot 2 other requests:
- add an option to usbX to allow migration (2 physical nodes with the same usb
device, a usb dongle for example); see the sketch after this list
- allow migration to another proxmox cluster (a different cluster with
different storage; I think it could be done by extending my live storage
patches)
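To illustrate the usb idea, the option could look something like this (just a
sketch; the migratable flag is hypothetical, it is not an existing qm option):

    # hypothetical flag: declare that an identical usb device is plugged
    # into the target node, so migration of the vm is allowed anyway
    qm set 106 -usb0 host=1234:5678,migratable=1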
> We were just surprised that the vm stays in error and that we can't start it
> manually.
> (maybe manual start should reset the HA error?)

This is the default mechanism we copied from rgmanager. The reasoning is
that the admin needs to make sure that the VM is really down, and not running
somewhere else.
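For reference, the manual recovery path today is roughly the following, if I
remember the CLI correctly (vm 106 taken from the example below):

    # clear the error state by disabling the resource ...
    ha-manager set vm:106 --state disabled
    # ... then re-enable it once the VM is confirmed down everywhere
    ha-manager set vm:106 --state enabled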
Hi,
here are some requests from the students of this training week.
(BTW, we haven't found any bugs.)
- add a qm command to import disks from other hypervisors (create the vm and
volumes, then qemu-img mirror); see the sketch after this list.
I think some patches were sent to the mailing list last year but never
finished
- installer
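As a rough illustration of the manual steps such an import command could wrap
(a sketch using an offline qemu-img convert instead of a live mirror; vm id,
storage and paths are made up):

    # create the target vm and an empty raw volume of matching size
    qm create 200 --name imported-vm --memory 2048 --net0 virtio,bridge=vmbr0
    pvesm alloc local 200 vm-200-disk-1.raw 32G
    # convert the foreign disk image into the new volume
    qemu-img convert -f vmdk -O raw /tmp/source.vmdk \
        /var/lib/vz/images/200/vm-200-disk-1.raw
    # attach the converted disk to the vm
    qm set 200 --virtio0 local:200/vm-200-disk-1.raw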
>> But I do not really understand why you want such a setup?

That was just a test during the training.
We were just surprised that the vm stays in error and that we can't start it
manually.
(maybe manual start should reset the HA error?)
BTW, the new gui for HA with priorities is very good!
> > Is it a bug?
>
> What you describe is expected behavior.
>
> IMHO it does not make much sense to define an
> HA group with only one node ...?

Well, you could set max_restart to a very high value, so
that it tries until that node is up again.
Maybe we can even make this a special case, and retry indefinitely.
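As a workaround today, something like this (a sketch; vm 106 is taken from
the example and the values are illustrative):

    # keep retrying the local restart instead of failing fast;
    # max_relocate 0 because there is no other node in the group anyway
    ha-manager set vm:106 --max_restart 1000 --max_relocate 0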
> Is it a bug?

What you describe is expected behavior.

IMHO it does not make much sense to define an
HA group with only one node ...?
Hi,
we are currently testing HA with the latest proxmox from the no-subscription
repository, and I think we have a bug in a specific case.

We defined an HA group with only 1 server, kvmformation1, with the restricted
flag set.
vm106 is in the HA group and runs on kvmformation1.
When kvmformation1 crashes, the vm stays in the error state and we can't
start it manually.
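The setup can be reproduced with something like this (a sketch; the group
name is made up):

    # a restricted group pins the resource to the listed nodes only
    ha-manager groupadd formation --nodes kvmformation1 --restricted 1
    ha-manager add vm:106 --group formation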