Re: Multiport NICs and ether channel?
On Sat, 17 Feb 2001, Willis L. Sarka wrote:

> Greetings,
>
> Just a general question or two. Please point me to a URL or tell me
> where to RTFM, or answer back ;-).
>
> What is the status/condition of using multiport NICs and bonding
> them together to form a larger pipe (i.e. a quad-channel ethernet card
> for an Intel box, bonding all four interfaces together to get a
> theoretical 400Mbps pipe)? Are there any highly recommended cards of
> this type? Will the bonding work when connected to a Cisco Catalyst
> switch with EtherChannel?

Linux bonding is compatible with Sun EtherTrunking and Cisco
EtherChannel/FastEtherChannel. On the Cisco side you follow their setup
examples, *except* that you *must* turn keepalives off on the Cisco.
Keepalives are a Cisco extension; if you fail to turn them off, the Cisco
will toggle the interfaces *off* every 10-30 seconds.
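For reference, the Linux side of a quad-port bond is just the bonding
module plus the ifenslave tool from the bonding documentation. A minimal,
untested sketch (the address and interface names are examples only):

    modprobe bonding
    ifconfig bond0 10.0.0.2 netmask 255.255.255.0 up
    ifenslave bond0 eth0
    ifenslave bond0 eth1
    ifenslave bond0 eth2
    ifenslave bond0 eth3

On an IOS-based Cisco the per-interface command to kill keepalives is
"no keepalive"; CatOS syntax differs, so check Cisco's EtherChannel
examples for your particular switch.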
Re: [PATCH] hashed device lookup (Does NOT meet Linus' submission
On Sun, 7 Jan 2001, Alan Cox wrote:

> > Um, what about people running their box as just a VLAN
> > router/firewall? That seems to be one of the principal uses so far.
> > Actually, in that case both VLAN and IP traffic would come through,
> > so it would be a tie if VLAN came first, but non-VLAN traffic would
> > suffer worse.
>
> Why would someone filter between vlans when any node on each vlan can
> happily ignore the vlan partitioning?

Think of VLANing switch clusters. Say four switches connected by GigE, on
four floors or in four separate buildings. Across these switches, 20
VLANs are running, with the switches enforcing the VLAN partitioning. The
client PCs know nothing about it, as each one resides within a single
VLAN. Now we have our Linux box with 2 x 100Mbit full-duplex links into
the switch cluster, running 10 VLANs per interface, plus an external
DS1/SDSL/whatever connection. That gives us 20 separate zones with
different security controls per zone, and per-switchport control over who
resides in which group. Or forget the routing entirely and just plug a
Linux box into a company's 200-VLAN setup to provide DHCP or whatever.

I must say, I *hate* VLANs for this use. It is a horrible thing to do
that wastes massive amounts of bandwidth simulating a local broadcast
domain across a much larger area, but oh well. As long as we have stupid
managers and brain-dead salespeople, not much will change.

Are there better things to do than VLANs? YES! Will we get stuck needing
VLANs in the real world? YES!
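For anyone wanting to build the setup above: the Linux 802.1q patch ships
a vconfig tool that creates one virtual interface per VLAN, and each gets
addressed and firewalled like any other interface. A rough sketch, with
the VLAN IDs and addresses made up for the example:

    vconfig set_name_type DEV_PLUS_VID_NO_PAD   # name them eth0.100 style
    vconfig add eth0 100
    vconfig add eth0 101
    ifconfig eth0.100 10.100.0.1 netmask 255.255.255.0 up
    ifconfig eth0.101 10.101.0.1 netmask 255.255.255.0 up
    echo 1 > /proc/sys/net/ipv4/ip_forward

Repeat for the remaining VLANs on each physical interface; the switch
ports facing the Linux box have to be configured as 802.1q trunk ports.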
Re: Bonding Driver Questions
On Sun, 24 Sep 2000, Constantine Gavrilov wrote:

> Hi, I'd like to use the channel bonding driver for high availability.
>
> Currently the bonding driver does not detect a dead slave link. When a
> slave link dies, it causes lots of network retransmits and the
> effective speed of the bonding device drops to almost zero. This has
> been verified in the lab.
>
> How difficult would it be to "teach" the bonding driver to check the
> link status of its slave interfaces? Does the ethernet layer provide a
> uniform way to check link status, or is it adapter dependent?

Cisco's solution to this involves "keepalive" packets. They default to on
for a FastEtherChannel link; each sub-interface sends them every 10
seconds (the default; it is configurable) and expects to receive them as
well. If one does not, it takes the sub-interface down until keepalives
return.

Unfortunately I have not been able to find any documentation on the Cisco
FastEtherChannel keepalive protocol; it would be nice to add it to Linux
bonding. It's kind of a dirty fix, but it would solve the link-state
issues and stay compatible with Cisco EtherChannel, Sun EtherTrunking,
and the Adaptec DuraLAN bonding systems.
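On the "uniform way to check link status" question: the nearest thing is
the MII ioctl pair (SIOCGMIIPHY/SIOCGMIIREG) for reading the PHY's
link-status bit, but it only works where the driver implements those
ioctls, which is exactly the adapter-dependent part. A userspace sketch
of the idea; the device name and 10-second poll are examples, and it
assumes an MII-aware driver and root privileges:

/* Sketch: poll a NIC's MII link-status bit from userspace.
 * Only works if the driver implements the standard MII ioctls;
 * many do not, which is the "adapter dependent" problem above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>
#include <linux/mii.h>

static int link_up(int fd, const char *dev)
{
	struct ifreq ifr;
	/* MII request data lives inline in the ifreq (mii-tool style) */
	struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);

	if (ioctl(fd, SIOCGMIIPHY, &ifr) < 0)	/* driver fills in PHY addr */
		return -1;
	mii->reg_num = MII_BMSR;		/* basic mode status register */
	if (ioctl(fd, SIOCGMIIREG, &ifr) < 0)
		return -1;
	return (mii->val_out & BMSR_LSTATUS) ? 1 : 0;
}

int main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "eth0";
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	for (;;) {
		int up = link_up(fd, dev);
		if (up < 0)
			printf("%s: no MII ioctl support in this driver\n", dev);
		else
			printf("%s: link %s\n", dev, up ? "up" : "down");
		sleep(10);	/* roughly the Cisco keepalive interval */
	}
}

The same polling logic, moved into the bonding driver with a dead slave
removed from the round-robin until its link bit comes back, is one way to
"teach" it link awareness for the drivers that cooperate.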