What I do is create a private range and a public range on the gateway
router. A DHCP server hands out IPs from the private range, and the VM
hosts themselves can have management access in that range. Give each VM
a bridged interface. On first boot it will get a private IP via DHCP,
which gives you management access, allows software updates, etc. Where a
VM needs a public IP, you can either put that on the gateway router and
NAT it to the private address, or statically set a public address on the
VM.
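If it helps, a rough sketch of what that looks like on a host -- the
interface name here is just an example, not from your hardware:

```
# /etc/network/interfaces on a Proxmox host (example names).
# vmbr0 bridges the 1G uplink; the host itself DHCPs a private
# management address, and VMs attached to vmbr0 get theirs from
# the same DHCP server on the gateway.
auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```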
The CEPH interfaces ought to be physically separate or on a separate
VLAN, and use only private IPs.
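A tagged VLAN for CEPH over the 10G pair might look something like this
(VLAN ID and subnet are placeholders):

```
# CEPH traffic on its own VLAN over the 10G bond; each host
# gets a static private address in the CEPH subnet.
auto bond1.40
iface bond1.40 inet static
        address 192.168.40.11/24
```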
With 4 hosts I don't know of any reason to make it more complicated than
that.
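If you go the NAT option and the gateway is Linux-based, the 1:1 mapping
can be as simple as this (both addresses below are placeholders):

```
# Map a public address to a VM's private address, both directions.
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.0.0.50
iptables -t nat -A POSTROUTING -s 10.0.0.50 -j SNAT --to-source 203.0.113.10
```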
On 12/29/2020 3:04 PM, Lewis Bergman wrote:
Bumping this as maybe it was too early for all the Proxmox geniuses to
see.
On Tue, Dec 29, 2020 at 7:36 AM Lewis Bergman <[email protected]> wrote:
Borg,
I have all the hardware in place now for a Proxmox cluster. The 4
HP servers each have the following:
2 EA 1G network - Purpose was for public access/management
2 EA 10G network - Purpose was for CEPH storage pools
iLO advanced
The servers will be plugged into two Cisco layer 3 switches in VSS
mode for redundancy and each like interface on LACP for redundancy
and increased bandwidth.
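[That LACP pairing maps to an 802.3ad bond on each host; a minimal
sketch, with made-up NIC names, assuming ifupdown-style config:

```
# Pair the two 10G ports into an LACP bond matching the
# port-channel on the Cisco side (interface names are examples).
auto bond1
iface bond1 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
```
]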
I am planning on getting a Proxmox support contract for at least
the first year but they say networking is beyond their scope.
I am asking if there is anyone on this list who feels they are
qualified to help design the network scheme of the cluster.
Bridged or routed, subnets, etc. All but a few of the VMs need to
have public IPs.
I want to avoid some basic mistake I might not realize until I am
months into the whole thing.
--
Lewis Bergman
325-439-0533 Cell
--
AF mailing list
[email protected]
http://af.afmug.com/mailman/listinfo/af_af.afmug.com