It's just tradeoffs. Many of the benefits (smaller failure domains, power savings, incremental expandability) can be counterbalanced by increased operational complexity. From my experience, if you don't have proper automation/tooling for management/configuration and fault detection, it's a nightmare. If you do have those things, then the benefits can be substantial.
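To make "proper automation/tooling" a little more concrete, here's a toy sketch of the config-generation side: one fabric definition rendered into per-box configs, instead of hand-editing every pizza box. The names, plane counts, interface naming, and addressing below are made up for illustration and aren't tied to any particular NOS:

    #!/usr/bin/env python3
    # Toy sketch: render per-leaf configs from one fabric definition.
    # Plane/leaf counts, interface names, and addressing are illustrative only.
    from ipaddress import ip_network

    FABRIC_PLANES = 4      # hypothetical number of fabric ("spine") boxes
    LEAF_COUNT = 8         # hypothetical number of line-card ("leaf") boxes
    _p2p_pool = ip_network("100.64.0.0/16").subnets(new_prefix=31)  # one /31 per fabric link

    def render_leaf(leaf_id: int) -> str:
        """Render a fake CLI config for one leaf: hostname plus one /31 per fabric plane."""
        lines = [f"hostname leaf{leaf_id}"]
        for plane in range(FABRIC_PLANES):
            link = next(_p2p_pool)
            local, remote = list(link)          # the two addresses in the /31
            lines.append(f"interface Fabric{plane}")
            lines.append(f"  description to fabric{plane} ({remote})")
            lines.append(f"  ip address {local}/31")
        return "\n".join(lines)

    if __name__ == "__main__":
        for leaf in range(LEAF_COUNT):
            print(render_leaf(leaf))
            print()

The fault-detection side would key off the same inventory, which is really where the "nightmare or not" line gets drawn.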
I think every network will have a tipping point at which such a model starts to make more sense, but at smaller scales I think fat chassis are still likely a better place to be. (Rough port-count arithmetic for what I mean by "tipping point" is sketched below the quoted thread.)

On Sat, Dec 21, 2024 at 9:51 AM Yan Filyurin <yanf...@gmail.com> wrote:

> When you say distributed router fabrics, are you thinking of the OCP concept with an
> interconnect switch doing ATM-like cell relay (after the flowery speeches about
> "not betting against Ethernet", of course)?
>
> https://www.youtube.com/watch?v=l_hyZwf6-Y0
>
> https://www.ufispace.com/company/blog/what-is-a-distributed-disaggregated-chassis-ddc
>
> It's mostly advocated by DriveNets. It has been a while, but from what I remember,
> the argument, and it has a lot of merit, is that you can scale to a much bigger
> "chassis" than you could with any big-iron device. If you look at Broadcom's latest
> interconnect specs,
> https://www.broadcom.com/products/ethernet-connectivity/switching/stratadnx/bcm88920,
> you can build some pretty big PoPs, and while they are trying to appeal mostly to the
> AI cluster crowd, one could build aggregation services with that, or something smaller,
> and you get incremental scaling and possibly higher availability, since everything is
> separated and you could even get enough RPs for proper consensus. I admit I have never
> seen it outside of a lab environment, but AT&T appears to like it. Plus, all the
> mechanics of getting through the fabric are still handled by the vendor and you manage
> it like a single node.
>
> One could argue that with chassis systems you can still scale incrementally, use
> different line card ports for access and aggregation, and your leaf/interconnect is
> purely electrical, so you are not spending money on optics. So it does not exactly
> invalidate the chassis setup, and that is why every big vendor will sell you both,
> especially if you are not at AT&T scale.
>
> There is of course the other design with normal Ethernet fabrics based on a fat tree
> or some other topology, with all the normal protocols between the devices, but then
> you are in charge of setting up, traffic engineering, and scaling those protocols.
> The IETF has done interesting things with these scaling ideas and some vendors may
> have even implemented them to the point that they work. :) But the "too many devices"
> argument starts creeping in.
>
> Yan
>
> On Fri, Dec 20, 2024 at 5:43 PM Mike Hammett <na...@ics-il.net> wrote:
>
>> I've noticed that the whitebox hardware vendors are pushing distributed router
>> fabrics, where you can keep buying pizza boxes and hooking them into a larger and
>> larger fabric. Obviously, at some point, buying a big chassis makes more sense.
>> Does it make sense to build up to that point? What are your thoughts on that
>> direction?
>>
>> -----
>> Mike Hammett
>> Intelligent Computing Solutions | http://www.ics-il.com/
>> Midwest Internet Exchange | http://www.midwest-ix.com/
>> The Brothers WISP | http://www.thebrotherswisp.com/
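For anyone who wants the back-of-the-envelope version of the "bigger than any chassis" / tipping-point arithmetic I referred to above, here is a rough sketch of a two-stage folded Clos built from identical fixed-form-factor boxes. The radix and the non-blocking split are my assumptions, not tied to any particular product:

    #!/usr/bin/env python3
    # Back-of-the-envelope scaling arithmetic for a two-stage folded Clos built
    # from identical fixed-form-factor boxes. PORTS is an assumption, not a spec.

    PORTS = 64  # ports per box, e.g. 64x400G; half face access, half face the fabric

    def two_stage_clos(leaves: int) -> tuple[int, int]:
        """Return (non-blocking access ports, total boxes) for `leaves` leaf boxes."""
        uplinks = PORTS // 2                      # each leaf's fabric-facing ports
        spines = -(-leaves * uplinks // PORTS)    # ceiling division: fabric boxes needed
        return leaves * (PORTS - uplinks), leaves + spines

    if __name__ == "__main__":
        # Max non-blocking leaf count is PORTS (every leaf reaches every spine).
        for leaves in (2, 4, 8, 16, 32, 64):
            ports, boxes = two_stage_clos(leaves)
            print(f"{leaves:3d} leaf boxes -> {ports:5d} access ports, {boxes:3d} boxes total")

With 64-port boxes that tops out around 2,048 non-blocking access ports across 96 boxes, which is more than most single chassis will give you, but you pay for it in fabric optics and in the sheer number of devices to manage, which is exactly the tradeoff above.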