Hello,
I am back to the initial problem related to that post: since I updated to
OpenVZ release 7.0.14 (136) / Virtuozzo Linux release 7.8.0 (609),
I am also facing a corrupted CT status.
I don't see the exact same error as mentioned by Kevin Drysdale below
(ploop/fsck), but I am not abl
Hello
Because I have a failed CT on a hardware node (cf.
https://lists.openvz.org/pipermail/users/2020-July/007928.html),
I manually moved the CT files (hdds, conf, etc.) to another hardware
node (HW) that doesn't seem to have the problem yet (not updated to
OpenVZ release 7.0.14 (136)).
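For reference, the manual move described above can be sketched as a short shell script. The UUID is the one from this thread; the destination host name "new-node" and the /vz/private and /etc/vz/conf paths are assumptions that may differ on your nodes. It is printed as a dry run, so remove the echos (or pipe the output to sh) to actually execute it:

```shell
# Hedged sketch: manually move a CT's files to another hardware node.
# Prints each step (dry run) rather than running it.
print_move_steps() {
    ctid="$1"; dest="$2"
    # copy the private area (disk images / hdds)
    echo "rsync -a /vz/private/$ctid $dest:/vz/private/"
    # copy the container config (path is an assumption; check your layout)
    echo "scp /etc/vz/conf/$ctid.conf $dest:/etc/vz/conf/"
    # register the private area on the destination node
    echo "ssh $dest vzctl register /vz/private/$ctid $ctid"
}

print_move_steps 144dc737-b4e3-4c03-852c-25a6df06cee4 root@new-node
```

The CT should be stopped (or at least suspended) before copying, otherwise the ploop images may be inconsistent.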
I did
On 07/06/2020 06:07 PM, Jehan Procaccia IMT wrote:
Hello
Because I have a failed CT on a hardware node (cf.
https://lists.openvz.org/pipermail/users/2020-July/007928.html),
I manually moved the CT files (hdds, conf, etc.) to another hardware
node (HW) that doesn't seem to have the problem yet (not updated to
OpenVZ release 7.0.14 (136))
Thanks, that works fine after restarting prl-disp.service; my "manually
moved/restored" CT can now run on the second hardware node:
# systemctl restart prl-disp.service
# prlctl list --all |grep 144dc737-b4e3-4c03-852c-25a6df06cee4
{144dc737-b4e3-4c03-852c-25a6df06cee4} suspended 192.168.1.1 CT ldap2
# pr
Hello
If it can help, here is what I did so far to try to re-enable dead CTs:
# prlctl stop ldap2
Stopping the CT...
Failed to stop the CT: PRL_ERR_VZCTL_OPERATION_FAILED (Details: Cannot
lock the Container
)
# cat /vz/lock/144dc737-b4e3-4c03-852c-25a6df06cee4.lck
6227
resuming
# ps auwx | grep 6227
ro
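The lock-file check above (cat the .lck file, then ps the recorded PID) can be sketched as a small helper that removes the lock only when the owning process is gone. The lock format (first line = PID, as in the /vz/lock example above) and the idea of deleting a stale lock are assumptions based on this thread, not documented Virtuozzo behavior; note also that `kill -0` must run as root to probe other users' processes:

```shell
# Hedged sketch: treat a CT lock file as stale if the PID recorded on
# its first line is no longer running, and remove it in that case.
check_stale_lock() {
    lock="$1"
    pid=$(head -n 1 "$lock")
    if kill -0 "$pid" 2>/dev/null; then
        echo "lock held by running pid $pid, leaving it alone"
    else
        echo "stale lock (pid $pid gone), removing $lock"
        rm -f "$lock"
    fi
}
```

Usage would be something like `check_stale_lock /vz/lock/144dc737-b4e3-4c03-852c-25a6df06cee4.lck`.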
You usually have to restart the "prl-disp" service when you have this
kind of problem, and/or "prlctl unregister" and/or "vzctl unregister".
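That recovery sequence can be sketched as follows. It is printed as a dry run (remove the echos to execute); the exact `prlctl register` syntax for CTs may differ on your Virtuozzo version, so treat these commands as an assumption to verify against your node:

```shell
# Hedged sketch of the suggested recovery: restart the dispatcher,
# then unregister and re-register the CT. Dry run only.
recover_ct() {
    ctid="$1"
    echo "systemctl restart prl-disp.service"
    echo "prlctl unregister $ctid"
    echo "prlctl register /vz/private/$ctid"
}

recover_ct 144dc737-b4e3-4c03-852c-25a6df06cee4
```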
On 7/6/20 5:07 PM, Jehan Procaccia IMT wrote:
Hello
because I have a failed CT on a hardware node, cf
https://lists.openvz.org/pipermail/users/2020-J