I just installed OpenVZ on CentOS 6.x 64 bit following this guide a
few days ago.
http://openvz.org/Quick_Installation_CentOS_6
I am mostly interested in creating containers with veth interfaces so
I can assign multiple public IPv4 and IPv6 addresses from inside the
container. I noticed that whe
>> Can anyone tell me what is going on here? After further testing I
>> determined it does not always do it. The first time I stop a
>> container after I r
Does vzdump place /etc/vz/conf/101.conf somewhere in the archive? I
cannot find it. If it's not in there, is the info that's normally in
it stored in the archive somehow, or do I need to back that up
separately?
Thanks.
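One way to answer this on your own backups is to list the archive contents. This is a hedged sketch: the archive path and name below are placeholders, and whether the config is embedded (some vzdump builds store it inside the tarball, e.g. as ./etc/vzdump/vps.conf) should be verified against your own archives rather than assumed.

```shell
# List the archive contents and look for a saved container config.
# /vz/dump/vzdump-101.tgz is a placeholder name; adjust to your backup file.
tar -tzf /vz/dump/vzdump-101.tgz | grep -i conf

# If no config file shows up, keep a separate copy of the CT config:
cp /etc/vz/conf/101.conf /vz/dump/101.conf.bak
```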
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
I have several bridged containers I need to run iptables on. I
assumed since they were bridged it would just work. Are there any
knobs I must turn to enable iptables on the container?
> I have several bridged containers I need to run iptables on. I
> assumed since they were bridged it would just work. Are there any
> knobs I must turn to enable iptables on the container?
In vz.conf I have:
## IPv4 iptables kernel modules to be enabled in CTs by default
IPTABLES="ipt_REJECT i
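The quoted vz.conf line is cut off above. For reference, a commonly seen shape of that setting, plus the per-container override, looks like the sketch below. The exact module list varies between OpenVZ releases, and CTID 101 is a placeholder, so treat this as an example rather than the canonical default:

```shell
# /etc/vz/vz.conf -- iptables modules made available to CTs by default
# (check your own vz.conf; the module set differs across releases):
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat"

# Per-container override; takes effect on next CT start:
vzctl set 101 --iptables "iptable_filter iptable_nat ipt_state ipt_REJECT" --save
vzctl restart 101
```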
I installed OpenVZ following this guide.
http://openvz.org/Quick_Installation_CentOS_6
I know it's not an official guide, but I need bridged containers. I
am installing DirectAdmin on them and they need control over their
interfaces to add and remove IP addresses.
>>Create a CT
>># vzctl create 102 --
Tried to back up a busy CentOS 6.x 64 bit container with vzdump and
ran into this. The host is a fresh install of OpenVZ on CentOS 6.x as
well.
Linux 2.6.32-042stab084.20 #1 SMP Mon Jan 27 00:40:08 MSK 2014 x86_64
x86_64 x86_64 GNU/Linux
vzdump --compress --suspend 103
Mar 05 22:53:55 INFO: Sta
> - Original Message -
>> Tried to back up a busy CentOS 6.x 64 bit container with vzdump and
>> ran into this. The host is a fresh install of OpenVZ on CentOS 6.x as
>> well.
>>
>> Linux 2.6.32-042stab084.20 #1 SMP Mon Jan 27 00:40:08 MSK 2014 x86_64
>> x86_64 x86_64 GNU/Linux
>>
>> vzdum
> # To check if modules are loaded
> lsmod | grep vzcpt
> lsmod | grep vzrst
>
> # To load modules
> depmod vzcpt ; depmod vzrst
>
> How to have them loaded in the future? That last command should take care of
> it I believe.
Yes, I was asking how to make sure they are loaded after reboot.
Still
> modprobe vzcpt
> modprobe vzrst
>
> Also, vz initscript (/etc/init.d/vz) should load these, but I have seen
> cases when it doesn't -- still can't figure out why though.
I did service vz restart and they loaded.
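To make the checkpoint/restore modules survive a reboot independently of the vz initscript, CentOS 6 runs any executable script placed in /etc/sysconfig/modules/ at boot. A minimal sketch (the filename is an arbitrary choice):

```shell
#!/bin/sh
# Save as /etc/sysconfig/modules/vzcpt.modules and mark it executable
# (chmod +x). CentOS 6's rc.sysinit runs *.modules scripts at boot, so
# vzcpt/vzrst get loaded even if the vz initscript misses them.
modprobe vzcpt 2>/dev/null
modprobe vzrst 2>/dev/null
```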
I used this guide to install OpenVZ.
https://openvz.org/Quick_Installation_CentOS_6
I am setting certain containers up as bridged so I can run IPv6 on
them. I have IPv6 working on the host but I cannot get it to work on
the container. The host and container can IPv6 ping each other but
the cont
> net.ipv6.conf.all.forwarding = 1
> net.ipv6.conf.all.proxy_ndp = 1
>
> Also check your ip6tables on the host.
> On 07/10/2014 05:15 PM, Matt wrote:
>
> I used this guide to install OpenVZ.
> https://openvz.org/Quick_Installation_CentOS_6
>
>
> I am setting certain conta
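The advice in the reply above can be made persistent in sysctl.conf. This is a hedged sketch: the address (2001:db8::100 is a documentation-prefix example) and the uplink interface name eth0 are assumptions to adapt to your bridge setup:

```shell
# /etc/sysctl.conf on the host -- typical settings for routed IPv6
# over veth:
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1

# If the host must answer neighbor discovery for the container's
# address on its uplink (interface and address are examples):
# ip -6 neigh add proxy 2001:db8::100 dev eth0
```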
I have OpenVZ installed on CentOS 6.x following this guide.
https://openvz.org/Quick_Installation_CentOS_6
Every time CentOS releases a new kernel it replaces the OpenVZ kernel
as the first boot option. Is there any way around this other than
editing the boot order manually after running yum update?
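One approach I'd expect to work on CentOS 6 is pinning the default kernel family in /etc/sysconfig/kernel, so that only matching packages update the default boot entry. Hedged: verify that DEFAULTKERNEL matches your actual vzkernel package name before relying on it:

```shell
# /etc/sysconfig/kernel -- new-kernel-pkg only promotes a freshly
# installed kernel to the default boot entry when its package name
# matches DEFAULTKERNEL, so stock "kernel" updates stop hijacking grub:
UPDATEDEFAULT=yes
DEFAULTKERNEL=vzkernel
```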
I see:
[root]# rpm -qa |grep kern
kernel-2.6.32-431.11.2.el6.x86_64
vzkernel-2.6.32-042stab088.4.x86_64
vzkernel-2.6.32-042stab090.3.x86_64
vzkernel-2.6.32-042stab092.1.x86_64
vzkernel-2.6.32-042stab092.3.x86_64
dracut-kernel-004-336.el6_5.2.noarch
kernel-2.6.32-431.5.1.el6.x86_64
kernel-2.6.32-43
>> exclude kernel updates in the yum config. I also find it useful to be able to
>> boot into a vanilla CentOS kernel for testing - just in case.
>>
>>
>> On Mon, Jul 28, 2014 at 2:36 PM, Matt wrote:
>>
>> I see:
>
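The yum-config approach mentioned above can be sketched as a one-line exclude. Hedged: the exact package list is a judgment call (excluding kernel-headers/devel too may be undesirable on build hosts), and vzkernel is unaffected since it does not match the pattern:

```shell
# /etc/yum.conf -- skip stock CentOS kernel updates entirely, so only
# vzkernel packages arrive via yum update:
exclude=kernel kernel-firmware
```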
https://openvz.org/Quick_installation
Any install guides for Centos 7 yet?
> It will probably be a few months before they have anything available for the
> RHEL7 kernel (3.10) and even longer for it to become stable. Ploop also
> doesn't support XFS so time will tell if that will change or you'll have to
> continue with ext4.
Need to update a server with a crashed hard
I have a container currently using about 150GB of space. It is very
random-I/O hungry and has many small files. Will converting it to
ploop hurt I/O performance?
In ploop, can inodes and disk size easily be increased for a container?
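As far as I know the answer is split: disk size yes, inodes no. A hedged sketch (CTID 101 and the sizes are placeholders):

```shell
# Disk size: a ploop image can be grown online, while the CT is running:
vzctl set 101 --diskspace 200G --save

# Inodes: the ploop image contains an ext4 filesystem, and ext4 fixes
# its inode count at mkfs time -- so to my knowledge the inode limit
# cannot be raised in place; recreating the image (or migrating the
# data to a freshly created CT) is the usual workaround.
```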
On Wed, Sep 24, 2014 at 7:34 PM, Kir Kolyshkin wrote:
> On 09/19/2014 11:45 AM, Matt wrote:
>>
>> I have a container currently using about 150GB of space. It is very
>> random I/O hungry. Has ma
Is there a command for backing up and restoring like vzdump that works
with ploop?
>> Is there a command for backing up and restoring like vzdump that
>> works with ploop?
>
> https://wiki.openvz.org/Ploop/Backup
Been studying that. It does not seem to compress the backup. Also, it
does not spell out how to restore it. I guess I can experiment, but I
really hoped ploop was far enough along
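For the two gaps noted (compression and restore), a rough sketch built around the wiki page's snapshot approach is below. This is an assumption-heavy outline, not a tested procedure: CTID, paths, and archive names are placeholders, and it should be verified on a throwaway container before trusting it with real data:

```shell
# Backup: snapshot the running CT, archive the private area with gzip,
# then drop the snapshot.
ID=$(uuidgen)
vzctl snapshot 101 --id "$ID"
tar -czf /vz/dump/ct101-backup.tgz -C /vz/private/101 .
vzctl snapshot-delete 101 --id "$ID"

# Restore: unpack into a fresh private area and start the CT. If the
# archive still contains a stale snapshot delta, it may need deleting
# (vzctl snapshot-delete) before the image is clean.
mkdir -p /vz/private/101
tar -xzf /vz/dump/ct101-backup.tgz -C /vz/private/101
vzctl start 101
```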
I need to install OpenVZ on CentOS 6 and I need to support both venet
and veth containers.
http://openvz.org/Quick_Installation_CentOS_6
This install method does that but says it's not supported/unofficial.
Are there supported install directions for this?
Thanks.
> On 10/1/2014 5:17 PM, Matt wrote:
>>
>> I need to install openvz on Centos 6 and I need to support both venet
>> and veth containers.
>>
>> http://openvz.org/Quick_Installation_CentOS_6
>>
>> This install method does that but says its not sup
Recently IPv6 quit working on my OpenVZ CentOS 6 box. It no longer
works on the host or containers. I imagine it was after a yum update.
What's weird is that if I reboot the OpenVZ host machine, IPv6 starts
to work again on the host and the containers for about 5 minutes, but
then quits again. Any ideas?
I i
> can you ping the host machine from within the container? What does
> "ip -6 r l" say? Is there anything in /var/log/messages regarding IPv6?
Yes I can ping my own IPv6 IP.
>
> In my host machines I had to do that in order to get IPv6 working
> properly:
>
> for proxy_ndp in /proc/sys/net/ipv6
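The quoted loop is truncated; a guess at its intent (enabling proxy_ndp on every interface) is sketched below. Hedged: this is a reconstruction, not the original poster's exact command, and on most setups you would restrict it to the interfaces you actually use:

```shell
# Enable NDP proxying on all IPv6-capable interfaces on the host:
for proxy_ndp in /proc/sys/net/ipv6/conf/*/proxy_ndp; do
    echo 1 > "$proxy_ndp"
done
```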
How do I tell if a container is ploop or simfs? How do I convert from
ploop to simfs?
The reason is that I need to move a few containers from a CentOS
server to a Proxmox server that does not support ploop.
> - Original Message -
>> How do I tell if a container is ploop or simfs? How do I convert
>> from ploop to simfs?
>
> Yep, the vzlist command with the appropriate field flag (as mentioned by the
> previous poster) will work. Here's the recipe for a vzlist output similar to
> stock just
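For reference, the layout check and the conversion story can be sketched as follows (CTID 101 is a placeholder; the ploop-to-simfs direction is my understanding rather than a documented command):

```shell
# Show the disk layout (ploop vs simfs) per container:
vzlist -a -o ctid,layout,hostname

# simfs -> ploop conversion is built into vzctl:
vzctl convert 101

# ploop -> simfs has no built-in converter that I know of; the usual
# route is creating a new simfs CT and rsync'ing the data across.
```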
http://openvz.org/Quick_installation
Will this installation allow bridged containers? I need containers
able to modify their own IP addresses, IPv4 and IPv6, and with
quotas. CentOS containers with DirectAdmin.
it takes swap into account, which is generally a bad idea.
Regards,
Matt
The same bug exists in Virtuozzo, "ln -s /proc/self/fd /dev/fd" appears
to be the fix.
Reference: http://www.tektonic.net/forum/showthread.php?t=1936
Steve Hodges wrote:
On 14/08/2007 1:58 AM, Gregor Mosheh wrote:
Or perhaps your VE has maxed out the number of FDs it's allowed to
have? Chec
Cliff Wells wrote:
I've got the following hardware:
CPU: 2x1.6Ghz Opteron 242 (may upgrade to a pair of 2.4Ghz dual-core
880's at some point)
RAM: 8GB Dual channel DDR-400 ECC (may upgrade to 16GB at some point)
Swap: 2x16GB partitions, one enabled
Disks: 6x160GB 7200rpm SATA-160 on 6-channel LS
Gregor Mosheh wrote:
Hey guys.
Some of our VPSs take well over an hour to start up after an
unexpected shutdown, thanks to vzquota. Reading the man page, it seems
that vzquota cannot be skipped and then done later once the services
are up. Am I correct about that, or IS there a way to skip quo
Gregor Mosheh wrote:
Matt Ayres wrote:
All you have to do is set VZFASTBOOT=yes in /etc/vz/vz.conf and VPS's
will be started without quota and then will be restarted to calculate
the quota after all have been started.
Interesting. I saw FASTBOOT in the /etc/init.d/vz script and that
Gregor Mosheh wrote:
Matt Ayres wrote:
All you have to do is set VZFASTBOOT=yes in /etc/vz/vz.conf and VPS's
will be started without quota and then will be restarted to calculate
the quota after all have been started.
I think I see where we're misconnecting. You say that FASTBOOT w
Gregor Mosheh wrote:
Hm, I'll take your word for it, though it doesn't sound right. I don't
want them to be restarted with a full quota recalculation - I don't
want quota recalculation at all. These fellas have 250 GB quotas and
are using most of it, so the recalculation takes 60+ minutes per
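For readers skimming this thread, the setting under discussion is a one-liner; per the explanation above, quotas are recalculated per-CT after boot (each CT is restarted once for that), so it trades boot time for a later recalculation rather than skipping it entirely:

```shell
# /etc/vz/vz.conf -- start CTs at boot without waiting for the full
# vzquota recalculation:
VZFASTBOOT=yes
```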
It does not appear 2.6.18 is being maintained.
http://wiki.openvz.org/Download/kernel
Thanks,
Matt
Is the deadline scheduler better than using the default cfq?
deadline may provide overall more system throughput, but the OpenVZ I/O
priorities are an extension of the CFQ elevator so no CFQ = no I/O
priorities.
Thanks,
Matt
vzctl version 3.0.22
Cheers,
Matt.
On 3/15/2010 11:22 AM, Aleksandar Ivanisevic wrote:
I really have no idea what to make of this. Although the HN's load
average is almost zero, most of the containers have LA in the high
3-digit range. To make it even more interesting, LA fluctuates from
single to triple digits within a minute; below are
care to clue me in, here?
Two things I can think of:
1) They are not loading the 'tun' driver on the host on boot.
2) They are not using the vzctl --save option when enabling the tun
device for your VPS.
Thanks,
Matt
TekTonic
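The two points above translate into roughly the following host-side commands (CTID 101 is a placeholder; these match the OpenVZ wiki's VPN setup as I recall it, so verify against your vzctl version):

```shell
# 1) Load the tun driver on the host (and persist it across reboots,
#    e.g. via an /etc/sysconfig/modules/ script on CentOS):
modprobe tun

# 2) Expose the device to the CT with --save so it sticks:
vzctl set 101 --devnodes net/tun:rw --save
vzctl set 101 --capability net_admin:on --save
```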