Hi Dietmar,
As said, the node has traditional vmbr (brctl) bridges, so with that
setup I do not know how to do what you suggest. But I am happy to learn.
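For reference, the current bridges are plain ifupdown stanzas roughly
along these lines (address and NIC name are just placeholders):

    auto vmbr0
    iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge_ports eno1
            bridge_stp off
            bridge_fd 0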
And as far as I can tell on my test server that uses openvswitch, I can
only assign one tag to an interface in a container.
So that will not work either. If I could assign multiple VLANs to an
openvswitch-based container interface, then I could create the VLAN
interfaces inside the container, ending up with as many VLAN devices as
required in the container, in my case more than 10.
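To illustrate, assuming eth0 inside the container were a trunk carrying
all the tags, it would come down to repeating something like this for
every VLAN (IDs and addresses are only examples):

    ip link add link eth0 name eth0.10 type vlan id 10
    ip link set dev eth0.10 up
    ip addr add 10.0.10.1/24 dev eth0.10
    # ... and the same again for the 10+ other VLAN IDs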
That would, however, require changing the current production setup on
the OVH server(s) from traditional bridging to openvswitch.
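If I understand the OVS setup correctly, the node's
/etc/network/interfaces would then have to change to something roughly
like this (NIC name again a placeholder):

    auto eno1
    iface eno1 inet manual
            ovs_type OVSPort
            ovs_bridge vmbr0

    auto vmbr0
    iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports eno1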
OVH servers offer good price/performance, but support is not so good and
there is no console, so if something goes wrong you have to order (and
pay for) a KVM to be attached for one day. That can take an hour or so,
as it is work that has to be done manually by a site engineer in the
data center.
But if there is a way, then I would be more than glad to learn about it.
Kind regards,
Stephan
On 23-08-2020 16:24, Dietmar Maurer wrote:
>> If it would be possible to provide a 'trunk' openvswitch interface to
>> the CT, then from within the CT vlan devices could be set up from the
>> trunk, but in the end that will still create 10+ interfaces in the
>> container itself.
>
> Can't you simply use a single network interface, then configure the
> VLANs inside the firewall?
>
> IMHO, using one interface for each VLAN is the wrong approach. I am sure
> next time people will ask for 4095 interfaces ...