To all,

I am running a Windows-based high-performance computing application that uses 
"reliable" multicast (29West) on a gigabit LAN. All systems are logically on 
the same VLAN and even on the same physical switch. The application is set to 
use an 8 KB buffer, which results in IP fragmentation when datagrams are 
transmitted. The application is sensitive to any latency or data loss (of 
course) and uses a proprietary mechanism to create TCP-like retransmissions in 
case there is any actual data loss. Unfortunately, because of the fragmentation, 
all IP fragments must be resent during the retransmission window even though 
only one may have been lost.
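For what it's worth, the arithmetic behind that amplification can be sketched quickly (assuming a standard 1500-byte Ethernet MTU, a 20-byte IPv4 header with no options, and an 8-byte UDP header; the 8192-byte payload is my reading of your "8k buffer"):

```python
import math

MTU = 1500      # standard Ethernet MTU
IP_HDR = 20     # IPv4 header without options
UDP_HDR = 8     # UDP header

def ip_fragments(udp_payload):
    """Number of IP fragments needed for one UDP datagram of udp_payload bytes."""
    ip_payload = udp_payload + UDP_HDR
    # Every fragment except the last must carry a multiple of 8 bytes,
    # so each full fragment holds (MTU - IP_HDR) rounded down to 8 bytes.
    per_frag = (MTU - IP_HDR) // 8 * 8   # = 1480 for a 1500-byte MTU
    return math.ceil(ip_payload / per_frag)

print(ip_fragments(8192))   # 8 KB datagram -> 6 fragments; losing any one loses all six
print(ip_fragments(1472))   # 1472 bytes is the largest payload that avoids fragmentation
```

So each 8 KB datagram rides in 6 fragments, and a single lost fragment forces a retransmission of the whole datagram.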

If the buffer size is tweaked down to ~1460 bytes, this may fix the 
fragmentation, but will the side effects be lower throughput and possibly more 
latency? Is there a sweet spot for UDP on an Ethernet segment?
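A back-of-envelope wire-efficiency comparison suggests the raw throughput cost of dropping to the largest unfragmented payload (1472 bytes) is tiny. This sketch assumes untagged gigabit Ethernet with 38 bytes of per-frame overhead (8-byte preamble, 14-byte header, 4-byte FCS, 12-byte inter-frame gap), a 20-byte IPv4 header per fragment, and one 8-byte UDP header per datagram:

```python
import math

ETH_OVERHEAD = 38   # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
IP_HDR, UDP_HDR, MTU = 20, 8, 1500

def wire_efficiency(udp_payload):
    """Fraction of on-the-wire bytes that are application payload."""
    frags = math.ceil((udp_payload + UDP_HDR) / ((MTU - IP_HDR) // 8 * 8))
    wire = udp_payload + UDP_HDR + frags * (IP_HDR + ETH_OVERHEAD)
    return udp_payload / wire

print(f"{wire_efficiency(8192):.3f}")  # fragmented 8 KB datagram
print(f"{wire_efficiency(1472):.3f}")  # largest unfragmented payload
```

Both land around 96%, so the header overhead difference is negligible; the real costs of the smaller datagrams would be more send/receive calls per message, which is a CPU and latency question rather than a bandwidth one.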

Philip
