Hello,

I've been presented with an opportunity to greatly simplify upstream
networking within a datacenter. At this point I'm expecting to condense
down to two 10 Gbps full-feed IPv4+IPv6 transit links plus a 10 Gbps link
to the peering fabric. Total 95th percentile transit averages in the 3-4
Gbps range, with bursts into the 6-7 Gbps range (outside of the rare DDoS,
when everything just catches fire until provider mitigation kicks in).

With the exception of the full tables it's a pretty simple requirement.
There are plenty of options for buying new TOR devices that could take
the full tables, but I'd rather not commit the budget for it. Plus this
feels like the perfect time to do what I've wanted for a while, and
deploy an OpenBSD & OpenBGPD edge.

I should probably ask first - am I crazy?
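For concreteness, here's roughly the bgpd.conf shape I have in mind for
the transit side. The ASN, router-id, prefix, and neighbor addresses are
all placeholders, and I've left out the usual bogon/own-space sanity
filters:

    # /etc/bgpd.conf (sketch, placeholder numbers throughout)
    AS 65000                        # our ASN (placeholder)
    router-id 192.0.2.1             # placeholder

    network 192.0.2.0/24            # prefix we originate (placeholder)

    group "transit" {
            neighbor 198.51.100.1 {
                    remote-as 65001
                    descr "transit-a"
            }
            neighbor 203.0.113.1 {
                    remote-as 65002
                    descr "transit-b"
            }
    }

    # take full tables in, announce only our own prefix out
    allow from ebgp
    allow to ebgp prefix 192.0.2.0/24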

With that out of the way: I could either land the fiber directly on NICs
in an appropriately sized server, or land the transit links on a 10 Gbps
L2 switch and use CARP to provide server redundancy on my side (each
transit link would sit in its own VLAN with two servers connected; the
primary server would hold our side of the /30 and speak BGP to the
carrier via bgpd, and the secondary could take over on heartbeat
failure). I would use two interfaces on each server - one facing the
Internet and one facing our equipment.
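Per transit VLAN I'm picturing something like the following (addresses,
vhid, interface, and passphrase are all placeholders; the backup gets
the same line plus a higher advskew, and net.inet.carp.preempt=1 set on
both boxes):

    # /etc/hostname.carp0 on the primary - our side of the carrier /30
    inet 198.51.100.2 255.255.255.252 198.51.100.3 vhid 10 carpdev ix0 pass examplepass

    # /etc/hostname.carp0 on the backup - same VIP, demoted
    inet 198.51.100.2 255.255.255.252 198.51.100.3 vhid 10 carpdev ix0 pass examplepass advskew 100

One thing I'm aware of: the BGP session itself doesn't fail over with
CARP, so the backup would have to establish a fresh session with the
carrier after taking over the address.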

Would the access switch in this configuration be a bad idea? Should I keep
things directly homed on the server?

And my last question - are there any specific NICs that I should look for
and/or avoid when building this?

Thanks!
Max
