> > I do not understand why it does not work with your DRBD driver ...
> It is no question of right or wrong. It is simply inconsistent.
> Yes, it doesn't matter, because you "repair" it when migrating a VM, and you
> are right, only then could it be a problem. But I simply like it when it is
> consistent ...
Hi!
>>> First, we call deactivate if you migrate a VM to another node.
>> OK, I haven't seen this. I will test once my plugin is ready.
The plugin is ready now and I could test this today.
-> It works like you said.
11.12.2016 21:16, Dietmar Maurer wrote:
> I simply do not understand why you think the current approach is wrong. It
> works with all major storage types, i.e. Ceph, sheepdog, iSCSI, NFS, Gluster,
> DRBD on LVM in dual-primary mode, DRBD9, ...
Actually, the current approach can theoretically (I don't currently have Ceph
on PVE4 to test it) lead to problems with Ceph with the krbd option on. If the
VM has voluntarily stopped, then deactivate_volume() is not being run and the
rbd map is not deleted. Then, you may somehow migrate the machine to another ...
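For what it's worth, a stale mapping left behind by that scenario could at least be cleaned up by hand. A minimal sketch only, assuming the usual `rbd showmapped` column layout; the function name and the image name are made up:

```shell
# Sketch: unmap a leftover krbd mapping for a given image after the VM
# stopped without deactivate_volume() being called.
# `rbd showmapped` columns assumed: id pool image snap device
cleanup_stale_map() {
    img="$1"
    dev=$(rbd showmapped | awk -v i="$img" '$3 == i {print $5}')
    # only unmap if a mapping for this image actually exists
    [ -n "$dev" ] && rbd unmap "$dev"
}

cleanup_stale_map vm-100-disk-1
```

Of course the real fix would be for the storage layer to do this itself when it notices the VM is gone.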
> > First, we call deactivate if you migrate a VM to another node.
> OK, I haven't seen this. I will test once my plugin is ready.
>
> > For HA, you need to fence the node before you start it on another node,
> If I am using HA. Currently I have two machines and as a first step I plan
> to do a ...
> IMHO DRBD8 is old and replaced by DRBD9. Linbit will provide a DRBD9 driver
> in future
Yes they will, but as stated in http://pve.proxmox.com/wiki/DRBD9
"DRBD9 integration is introduced in Proxmox VE 4.x as technology preview."
and Linbit says (http://www.gossamer-threads.com/lists/drbd/
> > Why? There is no real need to deactivate a volume, unless you move the VM
> > to another node.
> I already told you why:
> For a DRBD8 plugin, which I want to write, it is essential to switch the ...
IMHO DRBD8 is old and has been replaced by DRBD9. Linbit will provide a DRBD9
driver in the future.
> Why? There is no real need to deactivate a volume, unless you move the VM
> to another node.
I already told you why:
For a DRBD8 plugin, which I want to write, it is essential to switch the
volume back to secondary after using it. Otherwise it can't be used on the
other machine and you need ...
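To make the intent concrete, here is a minimal sketch of that deactivate step. The function name is made up; `drbdadm role` and `drbdadm secondary` are the actual DRBD8 commands I mean:

```shell
# Hypothetical deactivate step for a DRBD8-backed volume: demote the
# resource to secondary so the peer node can promote it afterwards.
drbd8_deactivate() {
    res="$1"
    # only demote if this node is currently Primary for the resource
    # (drbdadm role prints e.g. "Primary/Secondary")
    if drbdadm role "$res" | grep -q '^Primary'; then
        drbdadm secondary "$res"
    fi
}
```

This is exactly the step that never runs when the guest powers itself off.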
Hi!
>> It seems the storage plugin function "deactivate_volume" will not be
>> executed when the VM stops by issuing a "poweroff" command.
> That is by design ...
So there is NO interface between the code that already detects this
(e.g. "pvedaemon[6695]: client closed connection") and the storage ...
> It seems the storage plugin function "deactivate_volume" will be not executed,
> when the VM stops by issuing a "poweroff" command.
> Hint: It is executed when you click the Shutdown button from the GUI.
That is by design ...
> I have seen
> pvedaemon[21622]: client closed connection
> in the log, but no further storage ...
Hi!
It seems the storage plugin function "deactivate_volume" will not be executed
when the VM stops by issuing a "poweroff" command.
Hint: It is executed when you click the Shutdown button from the GUI.
I have seen
pvedaemon[21622]: client closed connection
in the log, but no further storage ...
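As long as there is no such hook, one could watch the VM state from outside. A rough workaround sketch only, assuming nothing beyond `qm status <vmid>` reporting the running state; the cleanup part is a placeholder:

```shell
# Rough workaround sketch: poll the VM state and run a storage cleanup
# step once a guest-initiated poweroff is detected.
# `qm status <vmid>` is a real PVE command; the cleanup is a placeholder.
watch_vm() {
    vmid="$1"
    while qm status "$vmid" | grep -q running; do
        sleep 5
    done
    echo "VM $vmid stopped, running storage cleanup"
    # placeholder: e.g. demote the DRBD resource or unmap the rbd device
}
```

Polling is ugly, which is why a real callback from the code that already sees the connection close would be much nicer.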