If you do use a port channel, keep in mind that you'll still only get 1Gb (or whatever the speed of a single member link in the LAG is) to any given host, and for any given conversation. In other words, a single stream of data from host to storage (or host to host) won't go any faster than 1Gb, because the hashing pins each flow to one member link. Even if you transmit round-robin on the outbound side, the switch still hashes the return traffic onto a single port toward the destination, so unless you're doing large-scale fan-in or fan-out, it's not really that useful.
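To make the flow-pinning concrete, here's a minimal sketch of the kind of per-flow hashing a switch does for a LAG. The field choice and the hash itself are purely illustrative (no vendor implements it this way); the point is that every packet of one conversation lands on the same member link:

```python
# Illustrative sketch of LAG member-link selection by flow hashing.
# The hash and field names are hypothetical, not any vendor's algorithm.
import hashlib

def select_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    """Hash the flow's addressing fields and pick one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Every packet of a given conversation hashes to the same link, so a
# single stream never exceeds one link's speed, however wide the LAG.
first = select_link("10.0.0.5", "10.0.0.9", 51515, 445, num_links=2)
assert all(select_link("10.0.0.5", "10.0.0.9", 51515, 445, 2) == first
           for _ in range(100))
```

Different conversations (different ports or hosts) may land on different links, which is why aggregate fan-in/fan-out traffic can use the full LAG while any one stream cannot.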
-Adam

On Tue, Dec 6, 2016 at 4:44 PM, Ted Cabeen <[email protected]> wrote:
> On 12/6/2016 1:09 PM, Jesse Becker wrote:
>
>>> I currently get 72+MB write and 85+MB read from the NAS
>>
>> You aren't going to get more than ~100 MB/sec out of a 1G link under
>> real-world conditions. Granted, going from 72MB/sec to 100 MB/sec is
>> a 38% improvement, but don't expect anything more. Now, if latency is
>> the problem, instead of throughput, that's a different issue.
>
> Yep, the overhead is the big factor. You're not going to get much more
> than a 20-30% improvement, regardless of the drives you add. I have a
> multi-drive flash-cache-backed NAS with a 10Gbit up-link, and from a 1
> Gigabit-Ethernet connected client, the fastest I can pull data is 97 MBps
> over CIFS.
>
> If you really need more speed, I'd recommend upgrading your Drobo and PCs
> to have multiple Gigabit Ethernet ports, and bonding together those ports
> to create a 2Gbps link for each device.
>
> --Ted
>
> _______________________________________________
> Discuss mailing list
> [email protected]
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
> This list provided by the League of Professional System Administrators
> http://lopsa.org/
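For reference, the ~100 MB/sec ceiling Jesse and Ted mention follows from simple frame-overhead arithmetic. The sketch below assumes standard 1500-byte MTU Ethernet with plain IPv4/TCP headers and no jumbo frames; the result is the theoretical best case, before CIFS/NFS protocol overhead pushes real numbers down toward 100 MB/s:

```python
# Back-of-envelope math for the throughput ceiling of gigabit Ethernet,
# assuming standard 1500-byte frames and plain IPv4/TCP (no options).
LINE_RATE_BPS = 1_000_000_000        # 1 Gb/s line rate
MTU = 1500                           # IP packet size per frame
ETH_OVERHEAD = 14 + 4 + 8 + 12       # header + FCS + preamble + gap = 38 B
IP_TCP_HEADERS = 20 + 20             # IPv4 header + TCP header

wire_bytes = MTU + ETH_OVERHEAD      # bytes on the wire per frame (1538)
payload = MTU - IP_TCP_HEADERS       # TCP payload per frame (1460)
efficiency = payload / wire_bytes    # ~0.949
max_mb_per_sec = LINE_RATE_BPS / 8 * efficiency / 1e6

print(f"{max_mb_per_sec:.0f} MB/s")  # prints "119 MB/s"
```

So ~118-119 MB/s is the wire-level maximum for TCP payload on 1GbE; anything in the 97-110 MB/s range, like Ted's 97 MBps over CIFS, is about as good as it gets once application-protocol overhead is counted.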
