[Beware, my English is awful, and I hope my question is not too stupid]
Hi,
In Lenny, how can I use an NFS server in a VE?
On the host:
modprobe nfsd works (/proc/net/rpc is created).
But in the VE, /proc/net/rpc is not there (a mount --bind from the host
into the VE doesn't work either; did I miss anything?)
# aptitude search vz |
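Roughly what I tried (101 and the bind target path are just example
values here):

# on the hardware node
modprobe nfsd
ls /proc/net/rpc                  # exists, so the module is loaded

# inside the container there is no /proc/net/rpc
vzctl enter 101
ls /proc/net/rpc                  # "No such file or directory"

# binding it in from the node did not help either
mount --bind /proc/net/rpc /vz/root/101/proc/net/rpc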
Hi all,
we have a couple of servers running the OpenVZ RHEL 5.4 kernel and
sharing a GFS filesystem.
We have done some locking tests with "ping_pong" over the GFS
filesystem, and we have realized that GFS is doing local flocks, not
distributed ones.
It is not due to mount options:
# mount -t gfs -o
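For reference, the ping_pong test we ran is the small lock benchmark
from the ctdb sources, pointed at a file on the shared GFS mount; the
paths and node count below are just examples:

# on node1 (argument = number of nodes + 1, so 3 for a 2-node test)
./ping_pong /mnt/gfs/ping.dat 3

# on node2, against the same file
./ping_pong /mnt/gfs/ping.dat 3

# with real cluster-wide locking the reported lock rate collapses as
# soon as the second node joins; here it stayed at the single-node
# rate, which is what makes us think the locks are only taken locally.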
I have a template that I use which needs a certain capability enabled
each time I deploy it. Is there an easy way to set this capability
inside the template itself?
I am sick of having to use this every time:
vzctl set # --capa sys_admin:on --save
If there's a way to script this, that would be great.
Hi,
add the capability (and any other settings) to your CT template config
file, like:
$ echo 'CAPABILITY="SYS_ADMIN:on "' >>
/etc/vz/conf/ve-.conf-sample
And deploy it ... // some capabilities need a restart
$ vzctl set --applyconfig template_cfg [...]
or use on your creation:
$ vzctl create --config template_
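Putting it together, a minimal sketch of that workflow (the sample name
"template_cfg", the CTID 101 and the ostemplate are just example values):

# 1. create a sample config that carries the capability
echo 'CAPABILITY="SYS_ADMIN:on "' >> /etc/vz/conf/ve-template_cfg.conf-sample

# 2a. apply it to an already deployed container
#     (some capabilities only take effect after a restart)
vzctl set 101 --applyconfig template_cfg --save
vzctl restart 101

# 2b. ...or reference it when creating new containers
vzctl create 101 --ostemplate debian-5.0-x86 --config template_cfg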
> [Beware, my English is awful, and I hope my question is not too stupid]
No problem; on this mailing list English is not the native language for
most people (my English is bad too :( )
Please read this manual:
http://wiki.openvz.org/NFS
You may not run a kernel-mode NFS server inside a container, but...
Also, you may use mount --bind (http://wiki.openvz.org/Bind_mounts) to
share files between different VEs located on one Hardware Node (I use
it, it works fine; you may need to update the container numbers in the
mount scripts after the node reboots so everything mounts correctly).
And try these manuals: http://wiki.openvz.org/Moun
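For the bind-mount approach, the wiki describes a per-container action
script at /etc/vz/conf/<CTID>.mount that vzctl runs on container start.
A rough sketch (CTID 101, /shared/data on the node and /mnt/data inside
the CT are made-up examples):

#!/bin/bash
# /etc/vz/conf/101.mount -- executed by 'vzctl start 101'
. /etc/vz/vz.conf        # global defaults
. ${VE_CONFFILE}         # per-CT config; vzctl exports VE_CONFFILE
# bind a directory from the Hardware Node into the container
mount -n --bind /shared/data ${VE_ROOT}/mnt/data

Because the script name contains the CTID, renumbering containers means
renaming these scripts too, which is presumably the fixing of container
numbers the previous poster mentions.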
Hello,
You can add this to your VE config. The default VE config sample is
/etc/vz/conf/ve-basic.conf-sample (the exact name depends on the
distro; you can change it via the CONFIGFILE option in
/etc/vz/vz.conf). So you can add
CAPABILITY="NET_ADMIN:on "
into the default config or create
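As a concrete sketch of that (the sample is named "basic" here purely
as an example of the ve-<name>.conf-sample convention):

# /etc/vz/vz.conf -- choose which sample newly created CTs use
CONFIGFILE="basic"    # resolves to /etc/vz/conf/ve-basic.conf-sample

# append the capability so every CT created from that sample gets it
echo 'CAPABILITY="NET_ADMIN:on "' >> /etc/vz/conf/ve-basic.conf-sample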
We've just started using OpenVZ on a test box here at work and I've
come across some odd things I don't understand, and can't figure out
from the wiki.
It began when I attempted to start an NIS client on a CT and it
couldn't bind to the NIS server, which is on the host system. However,
other netwo
Steve,
- "Steve Scaffidi" wrote:
> We've just started using OpenVZ on a test box here at work and I've
> come across some odd things I don't understand, and can't figure out
> from the wiki.
>
> It began when I attempted to start an NIS client on a CT and it
> couldn't bind to the NIS server
In your /etc/yp.conf file, do you specify "broadcast"? That won't work
in a vz container (domain <nisdomain> broadcast). I get around this by
specifying a server:
domain <nisdomain> server <nisserver>
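For example, a container's /etc/yp.conf could look like this (the
domain name and server address are made up):

# /etc/yp.conf inside the CT
# broadcast discovery generally does not work over venet, so name
# the server explicitly:
domain mydomain.com server 10.1.10.1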
E Frank Ball   fb...@efball.com
On Fri, Jan 08, 2010 at 12:03:52PM -0500, Steve Scaffidi wrote:
> We've ju
I found it... it was one of those "so blatant you don't even notice" errors.
Somehow the /etc/hosts file in the CT looked like this:
10.1.10.101 testserver.mydomain.com testserver testserver
After removing the extra hostname, everything works.
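For anyone hitting the same thing, the fixed entry simply drops the
duplicated short name:

10.1.10.101 testserver.mydomain.com testserver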
BTW, I've since figured out why the odd routing ent