Hi Max,

I think if you run decent, recent servers with Intel NICs (not virtualised)
you can get those numbers. We use HotLava multi-port 10GbE NICs (Intel-based).

The only thing to watch with software-only routers is how much headroom
you have for additional connections etc.
We have similar traffic levels to yours and hope to grow them...


What we did was use OpenBSD boxes as multihop edge routers, which also
act as iBGP route reflectors. These take the full feeds from our 2x
transits and then inject a subset of routes plus a default into a pair
of Trident-II-based switches (ours happen to be Arista). Those switches
BGP peer directly with the exchange.
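
The bgpd.conf side of that looks roughly like the sketch below. To be
clear, this is only an illustration: the ASNs, addresses and descriptions
are made up, and exact option details can vary a little between OpenBSD
releases.

    # /etc/bgpd.conf on one edge router (illustrative values only)
    AS 65000
    router-id 192.0.2.1

    # full feeds from the two transits; multihop since the sessions
    # aren't to a directly connected next hop in this design
    neighbor 198.51.100.1 {
            remote-as 65001
            descr "transit-1"
            multihop 2
    }
    neighbor 203.0.113.1 {
            remote-as 65002
            descr "transit-2"
            multihop 2
    }

    # iBGP down to the two Trident-II switches, reflecting routes
    group "switches" {
            remote-as 65000
            route-reflector
            neighbor 192.0.2.11 {
                    descr "arista-1"
            }
            neighbor 192.0.2.12 {
                    descr "arista-2"
            }
    }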

What is nice about this setup is that the 2x L3 switches do the heavy
lifting in terms of packet forwarding, while OpenBSD + OpenBGPD inject
the routes into them via iBGP. So I'm using OpenBSD as the control
plane, and the L3 switches as the forwarding plane (sort of).
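
The "subset of routes + default" part is just an OpenBGPD filter on the
iBGP group. Again a sketch with made-up prefixes; OpenBGPD filters are
last-match-wins, so the allow rules below override the deny for the
prefixes you actually want the switches to carry:

    # keep the full table on the OpenBSD boxes; hand the switches only
    # a default plus whichever more-specifics you want to steer
    deny to group "switches"
    allow to group "switches" prefix 0.0.0.0/0
    allow to group "switches" prefix 203.0.113.0/24   # example subset

This assumes a default route is in the RIB in the first place (e.g.
learned from the transits, or originated locally).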

This has been running in production for about a month now and seems
very performant and stable.
Here is a video of a talk about using OpenBSD in a multi-10G setup:
https://www.youtube.com/watch?v=veqKM4bHesM

Hope this helps,





On Tue, 18 Dec 2018 at 23:32, Max Clark <max.cl...@gmail.com> wrote:
>
> Hello,
>
> I've been presented with an opportunity to greatly simplify upstream
> networking within a datacenter. At this point I'm expecting to condense
> down to two 10 Gbps full feed IPv4+IPv6 transit links plus a 10 Gbps link
> to the peering fabric. Total 95th percentile transit averages in the 3-4
> Gbps range with bursts into the 6-7 Gbps (outside of the rare DDoS then
> everything just catches on fire until provider mitigation kicks in).
>
> With the exception of the full tables it's a pretty simple requirement.
> There's plenty of options to purchase a new TOR device(s) that could take
> the full tables, but I'd just rather not commit the budget for it. Plus
> this feels like the perfect time to do what I've wanted for a while, and
> deploy an OpenBSD & OpenBGPD edge.
>
> I should probably ask first - am I crazy?
>
> With that out of the way I could either land the fiber directly into NICs
> on an appropriately sized server, or I was thinking about landing the
> transit links on a 10 Gbps L2 switch and using CARP to provide server
> redundancy on my side (so each transit link would be part of a VLAN with
> two servers connected, the primary server would advertise the /30 to the
> carrier with bgpd, and the secondary server could take over on heartbeat
> failure). I
> would use two interfaces on the server - one facing the Internet and one
> facing our equipment.
>
> Would the access switch in this configuration be a bad idea? Should I keep
> things directly homed on the server?
>
> And my last question - are there any specific NICs that I should look for
> and/or avoid when building this?
>
> Thanks!
> Max



-- 
Kindest regards,
Tom Smyth

Mobile: +353 87 6193172
