On 09/12/2012 09:30 PM, Jake Carroll wrote:
> Hi all.
> I saw a question on this the other day, and thought I'd ask my own
> similar (but not the same) question.
> We have 10GbE interconnects for all our cluster work within SGE/OGE.
> Storage is provided via 10GbE TORs served out over NFS to all nodes.
I assume "TOR" means "top of rack"?
> We're wanting to squeeze what we can out of these switches and hardware.
> So:
> 1. Should we be using jumbo-frame style MTU settings?
While there is usually a recommendation to do so, I've not seen any
measurable performance difference myself. I have seen lots of unusual
problems created by not having the same MTU across all Ethernet devices
on the segment, forgetting to enable jumbo frames on a switch, or a
firewall blocking PMTU discovery. So I think you'll just be making
trouble for yourself. I'd recommend you stick with the defaults.
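If you do go down the jumbo-frame road anyway, at least verify that every
hop on the path actually passes the large frames before you put real
traffic on it. A quick sanity check from any Linux node (the hostname is
just a placeholder):

  # send a don't-fragment packet that only fits in a jumbo frame:
  # 8972 bytes of payload + 8 ICMP + 20 IP header = 9000
  ping -M do -s 8972 -c 3 some-other-node
  # and a standard-size packet for comparison
  ping -M do -s 1472 -c 3 some-other-node

If the large ping fails while the small one works, something on the path
(a NIC, a switch port, or a firewall) is still at 1500.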
> 2. If so – what MTU do users generally recommend? 9000? Above 9000?
Depends on your hardware. E.g. I had some older 10G NICs that could
only do 8000 bytes max. Some switches can do ~10K. Double-check all
your manuals. Or better yet, just use regular frame size.
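On Linux you can find out fairly quickly whether the NIC/driver will even
accept a given MTU, since the driver rejects values it can't support. A
minimal sketch, assuming eth0 is your 10G interface (substitute your own):

  # try to set a jumbo MTU; an unsupported value returns an error
  ip link set dev eth0 mtu 9000
  # confirm what the interface is actually running at
  ip link show eth0

The switch ports need a matching (or larger) setting too, which is where
the per-vendor manuals come in.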
> 3. What of hardware flow control? Generally enable it, or are there
> precautions/corner cases where it's not a sensible thing to do?
I've dinked with that setting but was not able to notice a measurable
difference. There are also implementation differences in how different
devices handle pause frames, e.g. IIRC some Cisco switches ignore them,
no matter the setting.
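If you want to experiment with it, ethtool is the usual knob on Linux, and
the pause-frame counters tell you whether the devices are actually honoring
it. A sketch, again assuming eth0 (counter names vary by driver):

  # show the current pause-frame (flow control) settings
  ethtool -a eth0
  # enable RX/TX pause, if the driver supports it (use "off" to disable)
  ethtool -A eth0 rx on tx on
  # many drivers expose pause-frame counters in the NIC statistics
  ethtool -S eth0 | grep -i pause

If those counters never move under load, the other end is probably ignoring
the pause frames regardless of what you set.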
> 4. Are there any other simple network-changes/suggestions people have
> that we could stand to benefit from?
When you're testing things, make sure you can saturate the switch. Some
older switches were 10G, but couldn't actually do 10G across all the
ports. I tend to just use iperf and see that I can get >9.9Gbps between
various endpoints, and between many endpoints simultaneously.
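For example, something along these lines with plain iperf (hostnames are
placeholders; repeat with several node pairs at once to actually load the
switch fabric rather than a single port):

  # on one node
  iperf -s
  # on another node, run 4 parallel streams for 30 seconds
  iperf -c node01 -P 4 -t 30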
> For reference – we're using the current ROCKS distribution (6.0, Mamba)
> and Dell PowerConnect 10GbE TORs/interconnects, coupled with Broadcom
> CNAs in all our blade/node infrastructure.
> Thanks!
> --JC
--
Alex Chekholko [email protected]
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users