OK, it's a bit disappointing to have ended up in this dead end.
If namespaces are properly isolated, then I guess I have something wrong in my config, perhaps my bridge configuration or the way I use nftables. My disappointment is that I don't know where to start debugging this problem: on the hardware node, in the CT...?
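As a starting point I will probably compare what each CT's SNAT rule sees with what the node's conntrack table records (a minimal sketch, assuming the conntrack tool is installed on the hardware node and the ctvpn1/ctvpn2 names from below):
# inside each CT: do the POSTROUTING SNAT rules match any packets? (check the counters)
vzctl exec ctvpn1 iptables -t nat -L POSTROUTING -v -n
vzctl exec ctvpn2 iptables -t nat -L POSTROUTING -v -n
# on the hardware node: which connections actually got source-NATed?
conntrack -L --src-nat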

Anyway, I will try a workaround: creating VMs instead of CTs in order to run multiple instances of the OpenVPN server on the same hardware node. I'll let you know, but I expect it to succeed, as VMs should be even more isolated!

Thanks.

On 04/03/2020 at 11:29, Vasily Averin wrote:
Dear Jehan,
we are not aware of the described problem. It looks quite strange to me; we believe network namespaces are properly isolated, and any settings (including any netfilter configuration) in one network namespace should not affect the others.

Thank you,
        Vasily Averin

On 3/4/20 11:06 AM, jehan.procac...@imtbs-tsp.eu wrote:
I did some more tests to try to resolve the SNAT/POSTROUTING concurrency problem on the same VZ7 host.
I can definitively confirm that I cannot have more than one CT doing SNAT on a single hardware node.
If I vzmigrate the second CT (the one failing to SNAT) to a different hardware node, then it works fine there, as long as it is the only one doing SNAT on that other hardware node.
So in order to have multiple OpenVPN CT instances (SNAT in bridged mode), I need to spread them across different hardware nodes.

While still on the same hardware node, I also tried moving the 2nd CT to a different IP subnet and bridge interface, with no success either.

Please let me know whether it is possible to run 2 or more containers doing SNAT on the same host. Is it a configuration problem on my side, or a feature not available in VZ?

Regards.

On 02/03/2020 at 17:22, jehan.procac...@imtbs-tsp.eu wrote:
Hello

Back to VZ netfilter: I still encounter difficulties with NAT (SNAT / POSTROUTING) in OpenVPN containers running concurrently.
With 2 OpenVPN containers using SNAT in POSTROUTING, only one can do it; the second one no longer performs the SNAT.
If I stop the 1st one and restart the 2nd (running alone), then it works for the 2nd, and vice versa.
To enable SNAT for my 2 containers concurrently, I had to move the second container to a different host; then both run fine. This "hack" works as I expected, but it is not a clean solution.
Do you have any idea why I cannot run 2 SNAT/POSTROUTING containers on a single host?

Tech info:
Hardware node hosting the containers:
# uname -a
Linux hardhost.int.fr 3.10.0-1062.4.2.vz7.116.7 #1 SMP Mon Dec 9 19:36:21 MSK 
2019 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
Virtuozzo Linux release 7.7

On the containers I set full netfilter:
# prlctl set ctvpn1 --netfilter full
# prlctl set ctvpn2 --netfilter full
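
A quick sanity check that the nat table is really exposed inside a CT (assuming iptables is installed in the container; the command should list the chains instead of failing):
# prlctl exec ctvpn1 iptables -t nat -L -n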

Inside the containers I tried, and struggled with, several tools to enable SNAT:
1) iptables:
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -j SNAT --to-source 157.109.2.13
COMMIT
2) firewalld with MASQUERADE (which never worked, but maybe because I hadn't realized my concurrency problem in the 1st place... and after all, my eth0 IP never changes, so it is more efficient to use SNAT instead of MASQUERADE!):
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -A POSTROUTING -s 10.91.10.0/22 -o eth0 -j MASQUERADE
3) nftables:
cat /etc/nftables/ipv4-nat.nft (with the include of ipv4-nat.nft uncommented in /etc/sysconfig/nftables.conf):
table ip nat {
    chain postrouting {
        type nat hook postrouting priority 0;
        oifname "eth0" snat to 157.109.2.13
    }
}
I tried those 3 approaches with both CentOS 7 and CentOS 8 containers; 1) and 3) both work fine as long as I run only one SNAT container on the hardware node.
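
To see whether the nftables rule still matches when both CTs run, a counter statement can be added to the rule above (a small sketch; the hit counts then show up when listing the table):
     oifname "eth0" counter snat to 157.109.2.13
# nft list table ip nat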

Let me know whether it is possible to run 2 or more containers doing SNAT on the same host.

Thanks.


On 26/02/2020 at 23:40, jehan.procac...@imtbs-tsp.eu wrote:
I finally found a working solution; it is not a VZ problem but rather an openvpn-server configuration issue => I moved to "proto tcp" instead of "proto udp"! Both protocols worked to open the VPN, but with UDP the routing didn't work.
Thanks to your 5-step check procedure, I realized at step 3) "tcpdump on vpn's tun device to check that the packets do arrive" that packets were not arriving at the VPN's tun0 interface. After moving both client and server to TCP, it worked.
Perhaps I have a problem with SNAT / iptables when using UDP? In any case, setting the following iptables rules works fine with TCP:
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -j SNAT --to-source 157.109.2.13
COMMIT
While it works fine with TCP, I will still investigate UDP, as it seems to be better optimized for VPNs; cf.
https://hamy.io/post/0002/openvpn-tcp-or-udp-tunneling/
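
When I get back to UDP, I plan to start by watching both sides of the CT while a client pings through the tunnel (a sketch assuming the default OpenVPN port 1194):
# tcpdump -ni eth0 udp port 1194
# tcpdump -ni tun0 icmp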

Back to the initial VZ questions:

2) Referring to
https://docs.virtuozzo.com/virtuozzo_7_users_guide/managing-network/networking-modes-in-virtuozzo.html
I do use bridged mode: I have veth interfaces on the server host and eth0 counterparts in the containers.

2.1) It should work with firewalld as described in the NAT section of
https://r.je/openvpn-nat
or in step 11 of https://tecadmin.net/install-openvpn-centos-8/,
but for the sake of simplicity I reverted back to iptables:
https://www.digitalocean.com/community/tutorials/how-to-migrate-from-firewalld-to-iptables-on-centos-7
Another useful link:
https://wiki.archlinux.org/index.php/OpenVPN_server_in_Linux_Containers

2.3) I kept using eth0 and it works fine. I suppose I see eth0@if70 because on the server host there is a corresponding
70: vme001851fefa53@if3 <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br2 state UP group default qlen 1000
probably a "link" between the two counterparts, the host veth and the CT eth0!?

2.4) rp_filter is set to 1; however, it works fine now (TCP).
CT# cat /proc/sys/net/ipv4/conf/all/rp_filter
1
I can set it to 0, but I am not sure it is interpreted as such in the CT context. Or does the /proc/sys/net/ipv4/conf/all/rp_filter value on the server host take precedence? Both share the same kernel; do they share /proc?
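
If I understand correctly, /proc/sys/net is per network namespace, so a value written inside the CT should only affect the CT's own stack, e.g.:
CT# sysctl -w net.ipv4.conf.all.rp_filter=0
CT# sysctl -w net.ipv4.conf.tun0.rp_filter=0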

Thanks for your detailed help.

Regards.


On 25/02/2020 at 22:11, Dmitry Konstantinov wrote:
1) I meant you don't need any special capabilities to run openvpn.
Just the tun device should be available.

2) Sorry for the confusion, I meant the OpenVZ networking: routed (venet device) or bridged (veth).

2.1) I don't use firewalld and am not familiar with its syntax.
2.2) It really depends on how you want the packets to travel: if they are supposed to go through eth0, then you need to use eth0 in all the configurations.
2.3) I honestly don't know if a name like eth0@if248 is going to
be accepted by the tools.
2.4) I am not sure, but with bridged networking you will probably need to disable rp_filter. Not necessarily; it depends on your configuration. Set the sysctl variable to zero.

Example:

In my particular case openvpn is used to access a private network at a remote location. I've got two addresses configured on the venet0 device, let's say 123.124.125.126 and 192.168.192.168.
The private network is 192.168.192.0/24.
openvpn uses the 192.168.10.0/28 internal subnet, with 192.168.10.1 assigned to tun0.

openvpn config:
--
topology        subnet
ifconfig        192.168.10.1 255.255.255.240
mode            server

server          192.168.10.0 255.255.255.240
push            "route 192.168.192.0 255.255.255.0"
--

sysctl.conf:
net.ipv4.ip_forward = 1

iptables:
*nat
:POSTROUTING ACCEPT [0:0]
[0:0] -A POSTROUTING -s 192.168.10.0/28 -j SNAT --to-source 192.168.192.168
COMMIT
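
(On CentOS this would typically be loaded with something like: iptables-restore < /etc/sysconfig/iptables, assuming the stock iptables-services layout.)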

That's all

Now let's say I know for sure 192.168.192.100 is up and running in the private network. I have an established connection to the VPN from my local machine and try to ping it, but there's no response.
I'd probably check things in the following order:

1) ip a l; ip r l on the local machine to check that I have the connection established and the route active

2) tcpdump on local tun device to check that the packets do leave

3) tcpdump on vpn's tun device to check that the packets do arrive

4) tcpdump on vpn's eth/venet device to check if the packets are routed
between interfaces and have the source address changed.

5) ping from vpn container - you might have weird filtering on the
server that hosts the container.
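
For steps 2-4 that boils down to something like this (a sketch, assuming an ICMP test and the device names above):

# step 2, on the local machine:
tcpdump -ni tun0 icmp
# step 3, in the vpn container:
tcpdump -ni tun0 icmp
# step 4, vpn container egress; the source should now be 192.168.192.168:
tcpdump -ni venet0 host 192.168.192.100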



On Tue, 25 Feb 2020 16:32:43 +0100
Jehan Procaccia <jehan.procac...@imtbs-tsp.eu> wrote:

OK for 1); so I don't need any capability (net_admin, sys_time). I was wondering because I had read about them in lots of docs, such as:
https://github.com/OpenVZ/vz-docs/blob/master/virtuozzo_7_users_guide.asc
Perhaps that is deprecated?

For 2), I use routed openvpn (tun0).
Yes, I mixed up iptables and firewalld a lot while debugging my problem.
2.1) I would prefer to use firewalld; can you confirm the rule you use?
POSTROUTING with MASQUERADE, or do you have an iptables SNAT example?
2.2) If I use an eth0 interface, do you confirm that venet0 (which is DOWN on my CT) is not involved at all?
2.3) My eth0 appears as eth0@if248 (ip addr); does it matter for the firewall-cmd arguments => "-o eth0"? Should I use -o eth0@if248?
2.4) What do you mean by rp_filter (reverse path filtering)? Should I disable it, and how?

Thanks.


On 25/02/2020 at 14:54, Dmitry Konstantinov wrote:
openvpn does work. dev/tun:rw and full netfilter are all the 'extras' I have in the container's config.

1) Not sure if it still works, but it's probably not useful in this particular case; I've never used any capabilities for openvpn.

2) I use a single postrouting rule. Like the last one in your list.


I don't quite understand your setup. Do you use routed or bridged networking? With firewalld you configure eth0, but I see venet0 in iptables. I don't have much experience with eth devices inside a container; perhaps you need to configure rp_filter for it to work with openvpn.





On Tue, 25 Feb 2020 10:21:33 +0100
Jehan Procaccia <jehan.procac...@imtbs-tsp.eu> wrote:
Hello,

I have VPNs that work perfectly on OpenVZ 6; now I have moved to OpenVZ 7, and I cannot make them forward or masquerade between interfaces.

I am wondering about a few concepts:

1) Is enabling capabilities still supported/useful?

i.e.: prlctl set ctvpn --capability net_admin:on => doesn't save anything in the CT conf...

I did set

prlctl set ctvpn --netfilter full => in order to have the nat and mangle chains

2) Does it matter whether I use iptables or firewalld? MASQUERADE or SNAT?

Neither of them works.

For MASQUERADE I did:

firewall-cmd --permanent --direct --passthrough ipv4 -t nat -A POSTROUTING -s 10.91.10.0/22 -o eth0 -j MASQUERADE

For iptables I tried:

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o venet0 -j SNAT --to-source 157.109.2.13
-A POSTROUTING -s 10.91.10.0/22 -j SNAT --to-source 157.109.2.13
COMMIT

By the way, is venet0 important? It appears DOWN in the CT!?

2: venet0: <BROADCAST,POINTOPOINT,NOARP> mtu 1500 qdisc noop state
DOWN group default
       link/void
3: eth0@if248: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000

/dev/net/tun is working correctly.

I set it with: vzctl set ctvpn --devnodes net/tun:rw --save

CT-ABC /# ls -l /dev/net/tun
crw-rw-rw- 1 root root 10, 200 Feb 25 10:07 /dev/net/tun
CT-ABC /# cat /dev/net/tun
cat: /dev/net/tun: File descriptor in bad state
=> a message which means it is operational!

openvpn uses the tun interface; connecting clients to the openvpn server works fine, but routing between the interfaces (tun0 and eth0) doesn't work.

Of course, ip_forward is enabled:

CT-ABC /# cat /proc/sys/net/ipv4/ip_forward
1

Thanks for your help.






_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
