If they had rolled out 1000G networks now, I guess we would have to plug in 17 MTP interfaces ;)
HTH,
Dan #13685 (RS/Sec/SP)
The CCIE troubleshooting blog: http://dans-net.com
Bring order to your Private VLAN network: http://marathon-networks.com

On Thu, Sep 27, 2012 at 2:51 PM, Eugen Leitl <eu...@leitl.org> wrote:
>
> http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/
>
> Terabit Ethernet is Dead, for Now
>
> by Mark Hachman | September 26, 2012
>
> A straw poll of the IEEE's high-speed Ethernet group finds that 400 Gbits/s
> is almost unanimously preferred.
>
> Sorry, everybody: terabit Ethernet looks like it will have to wait a while
> longer.
>
> The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group
> met this week in Geneva, Switzerland, with attendees concluding—almost to
> a man—that 400 Gbits/s should be the next step in the evolution of
> Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees
> who voted supported 400 Gbits/s as the basis for the near-term “call for
> interest,” or CFI.
>
> The bandwidth call to arms was sounded by a July report by the IEEE, which
> concluded that, if current trends continue, networks will need to support
> capacity requirements of 1 terabit per second in 2015 and 10 terabits per
> second by 2020. In 2015 there will be nearly 15 billion fixed and
> mobile-networked devices and machine-to-machine connections.
>
> The report goes on to predict that, from 2010 to 2015, global IP traffic
> will experience a fourfold increase, from 20 exabytes per month in 2010 to
> 81 exabytes per month in 2015, a 32 percent CAGR. Storage is expected to
> grow to 7,910 exabytes in 2015, with over half of it accessed via
> Ethernet. Of course, one of the first places the new, faster Ethernet
> links will appear is the data center.
>
> With that in mind, the IEEE 802.3 group began formulating a response.
> However, virtually all attendees seemed to be in agreement before the
> meeting opened, as only one presentation focused on the feasibility of
> one-terabit Ethernet, and even that presentation eventually concluded that
> 400 Gbits/s made more sense in the near term.
>
> Kai Cui and Peter Stassar from Huawei Technologies suggested that the most
> cost-effective method for developing a 1-terabit Physical Medium Dependent
> (PMD) would be to leverage today’s 100-Gbit technology, which isn’t yet in
> high volume and is therefore not cost-optimized. “[The] cost target for
> 1Tb/s needs to be at or below 100G cost/bit*sec and required R&D
> investments should be modest,” they wrote as part of their presentation.
>
> “100GbE technology based architecture would imply 40 lanes at 25G, which
> clearly would imply impractically big packages and large amount of
> interface signals,” Cui and Stassar added, meaning the number of
> electrical and optical interface lanes would need to be reduced to enable
> a reasonable package size. While alternative modulation formats could be
> used (5λ×200G DP-16QAM, 4 bits/symbol, 25G), “neither the multi-level nor
> the phase modulation format based technologies have been demonstrated to
> be sufficiently mature to justify usage in client PMDs towards 100Gb/s to
> 1Tb/s applications.”
>
> They concluded: “1Tb/s does seem a ‘bridge too far’ at least for the
> coming 3 to 4 years.”
>
> Chris Cole of optical components maker Finisar presented the case for a
> 400-Gbit CFI, with backing from Brocade, Cisco, HP, IBM, Intel, Juniper,
> and Verizon, among others.
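A quick sanity check on the lane arithmetic above, since it is the crux of the 400G-vs-1T argument: if you reuse 25G electrical lanes, the lane count scales linearly with the aggregate rate. A minimal Python sketch of the numbers cited in the article (the figures are theirs; the script and variable names are mine):

    # Lane counts when an N Gb/s interface reuses 100GbE-era 25G lanes.
    LANE_RATE_GBPS = 25

    for rate_gbps in (100, 400, 1000):
        lanes = rate_gbps // LANE_RATE_GBPS
        print(f"{rate_gbps}G -> {lanes} lanes at {LANE_RATE_GBPS}G each")
    # 100G  ->  4 lanes
    # 400G  -> 16 lanes
    # 1000G -> 40 lanes  (the "impractically big packages" case)

    # The quoted traffic forecast is also self-consistent:
    # (81 / 20) ** (1 / 5) - 1 = 0.323, i.e. roughly the 32% CAGR cited.

    # And the alternative modulation option works out per wavelength
    # (gross rate, ignoring FEC overhead):
    # DP-16QAM at 25 Gbaud = 2 polarizations * 4 bits/symbol * 25 Gbaud
    #                      = 200 Gb/s, so 5 wavelengths reach 1 Tb/s.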
>
> Like Huawei’s Cui and Stassar, Cole indicated that 400-Gbit Ethernet can
> reuse 100 GbE building blocks, and fits within the existing dense 100 GbE
> roadmap. Faster data rates require “exotic” implementations, with higher
> R&D investment and a longer time to market. “Data rates beyond 400Gb/s
> require an increasingly impractical number of lanes if 100GbE technology
> is reused,” he said.
>
> 400 Gbit/s also makes more sense than a 4×100 Gb/s link aggregation, Cole
> added, as fewer items promote management efficiency. Individual link
> congestion is also a concern: “Without faster links, [the] link count
> grows exponentially, therefore management pain grows exponentially.”
>
> Cole suggested that a potential 400 Gb/s MAC/PCS ASIC could be fabricated
> in either 20- or 28-nm CMOS, using a 400-bit-wide bus and a 1 GHz clock
> rate. “There is a strong desire to reuse 802.3ba, 802.3bj, and 802.3bm
> technology building blocks,” he said.
>
> That’s not to say that terabit Ethernet won’t be needed, Cole concluded,
> or 1.6-terabit Ethernet, at that. The timeframes for those follow-on CFIs
> could be three to six years out, he said.
>
> The CFI hasn’t formally occurred; until it does, nothing has been decided.
> So far, the most likely dates for formalizing the CFI are either next
> month or November. But at this point, it looks like terabit Ethernet is a
> dead duck, at least for the near future.
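For what it's worth, Cole's MAC/PCS numbers check out with back-of-the-envelope math: the throughput of a parallel datapath is its bus width times its clock rate, so a 400-bit bus at 1 GHz moves exactly 400 Gb/s. A minimal sketch (variable names mine):

    # Throughput of a parallel datapath = bus width (bits) * clock (Hz).
    bus_width_bits = 400
    clock_hz = 1_000_000_000  # 1 GHz

    throughput_gbps = bus_width_bits * clock_hz / 1e9
    print(f"{throughput_gbps:.0f} Gb/s")  # 400 Gb/s, the proposed MAC/PCS rate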