On 13.05.2015 2:09, Pavel Odintsov wrote:
> Completely disagree with "After hitting bug https://bugzilla.openvz.org/show_bug.cgi?id=2470 I completely disabled suspending on stop for all hardware nodes - "VE_STOP_MODE=stop" in /etc/vz/vz.conf - and don't use it at all."
Sorry, but I really do set "VE_STOP_MODE=stop" in /etc/vz/vz.conf, because checkpointing is too slow on my hardware with many containers and HDDs without SSDs, and a plain stop/start is much faster than suspending/resuming all CTs for a hardware node reboot. So "VE_STOP_MODE=stop" gives the minimal downtime. And yes, bug https://bugzilla.openvz.org/show_bug.cgi?id=2470 prevents nginx from starting after a CT resume following a hardware node reboot. I need the most stable/reliable server possible - that is the first-priority requirement.
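Concretely, it is a one-line change in the global config (just a minimal sketch of the relevant line; nothing else in vz.conf needs to change for this):

    # /etc/vz/vz.conf
    # Stop containers on node shutdown/reboot instead of suspending them
    # via cpt/rst (the other possible value here is "suspend").
    VE_STOP_MODE="stop"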
> We have used cpt/rst for tens of thousands of containers for a few years, and in 99.9% of cases it works like a charm. It is one of the killer features of OpenVZ.
But why would I need to use cpt/rst with OpenVZ at all? A CT must stay online with maximum uptime, without downtime during cpt/rst. If a CT is completely damaged/broken, I just restore it from a backup.

---

If you are talking about live migration of CTs between hardware nodes - I can't easily use this feature with my current main hosting provider: Hetzner allows a maximum of 3 Failover IPs, at € 4.20 / month for each IP plus an additional € 12.61 / month for the Flexi-Pack. More details here: http://wiki.hetzner.de/index.php/Failover/en
The bash script for switching an IP between servers is also not trivial: http://wiki.hetzner.de/index.php/Failover_Skript/en
And a Hetzner Failover subnet can't be used with OpenVZ, because "A failover subnet can only be switched as a whole, single IPs from the subnet can not be switched individually".
So CT live migration with Hetzner looks like a very costly and limited solution - at most 3 OpenVZ CTs could be live migrated. Maybe other hosting providers have other restrictions, but right now I mostly use Hetzner.de as the winner in price/performance ratio, after protecting the most valuable sites from DDoS via CloudFlare.com.

---

Also, the main reason I can't use OpenVZ live migration is the incompatibility between live migration and ZFS: as I understand it, live migration requires ploop images located on ext4 filesystems, and I can't use simfs on top of ZFS for it. But ZFS is the most natural way to get an optimal price/performance ratio with a very high level of storage reliability, built from big, slow HDDs plus fast SSDs used as ZFS L2ARC (a rough sketch of that layout is at the end of this mail). So, weighing the benefits of ZFS against OpenVZ live migration, I choose ZFS and do without live migration at all.

---

Now, as I understand it, the main trend in DevOps / Continuous Delivery is the approach described at http://martinfowler.com/bliki/ImmutableServer.html with on-the-fly switching between online instances via http://martinfowler.com/bliki/BlueGreenDeployment.html
Many new userland utilities are being created for this purpose: for example, Docker.com and coreos/rkt with the App Container spec:
http://www.opennet.ru/opennews/art.shtml?num=41168
http://www.opennet.ru/opennews/art.shtml?num=41545

As for me, the ideal server is a Linux hardware node able to run KVM virtual machines, OpenVZ containers, and probably some App Container Specification implementations - both inside OpenVZ containers and on top of the hardware node - simultaneously. This would allow seamless migration from KVM-based Linux virtual machines to OpenVZ containers, and in the future also a seamless software migration from OpenVZ CTs to App Container Images and App Container Runtimes.

--
Best regards,
Gena
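P.S. For illustration only, the ZFS + simfs layout mentioned above looks roughly like this; the pool/dataset names, devices and CTID are made-up examples, not my exact setup:

    # Pool on big, slow HDDs; a fast SSD attached as L2ARC cache.
    zpool create vz mirror /dev/sda /dev/sdb
    zpool add vz cache /dev/sdc
    # One ZFS dataset per container, mounted where vzctl expects the CT
    # private area, and the container created with the simfs layout
    # (no ploop image) - which is exactly why live migration is out.
    zfs create -o mountpoint=/vz/private/101 vz/ct101
    vzctl create 101 --ostemplate centos-6-x86_64 --layout simfs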