Software Engineer 979 wrote:
> 
> I'm currently developing a data transfer application using OpenSSL. The
> application is required to securely transfer large amounts of data over a
> low-latency/high-bandwidth network. The data being transferred lives in a
> 3rd-party application that uses a 1 MB buffer to hand data to my
> application. When I hook OpenSSL into my application, I notice an
> appreciable decline in network throughput. I've traced the issue to the
> default TLS record size of 16 KB. The smaller record size causes the
> 3rd-party application's buffer to be segmented into 64 16 KB records per
> write, and the resulting overhead considerably slows things down. I've
> since modified the version of OpenSSL that I'm using to support an
> arbitrary TLS record size, allowing OpenSSL to scale up to a 1 MB or
> larger record size. Since this change, my network throughput has improved
> dramatically (from a 187% degradation down to 33%).

I have strong doubts that the TLS record size is the problem here.
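
For what it's worth, SSL_write() already accepts the whole 1 MB buffer
in a single call and performs the 16 KB (SSL3_RT_MAX_PLAIN_LENGTH)
record fragmentation internally; nothing forces the application layer
to chop its writes into 16 KB pieces. A minimal sketch, assuming an
already-established connection on a blocking socket BIO (the buffer is
only a stand-in for your 3rd-party one):

    #include <openssl/ssl.h>

    /* Sketch only: "ssl" is assumed to be an established SSL* on a
     * blocking socket BIO.  SSL_write() takes the full buffer and
     * splits it into 16 KB records itself; with the default modes it
     * returns only once the whole buffer has been written. */
    static int send_block(SSL *ssl, const unsigned char *buf, int len)
    {
        int rc = SSL_write(ssl, buf, len);   /* e.g. len == 1024 * 1024 */
        if (rc <= 0)
            return SSL_get_error(ssl, rc);   /* caller handles the error */
        return 0;
    }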

But I've seen higher-layer design flaws cause such problems before.
The OpenSSL memory BIO interface, for instance, seems to exhibit
quite sub-optimal behaviour: it copies data around excessively
if you fill it with megabytes of data at once.
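
If that is the interface you are using, the usual cure is to interleave
the SSL_write() calls with frequent drains of the network-side BIO
rather than queueing the whole megabyte inside the memory BIO first.
A rough sketch of that pattern, assuming a BIO pair created with
BIO_new_bio_pair() and wired to the SSL with SSL_set_bio(); the
send_to_socket() helper is hypothetical:

    #include <openssl/bio.h>
    #include <openssl/ssl.h>

    /* Sketch only: network_bio is the outward half of the BIO pair,
     * and send_to_socket() is a hypothetical helper that writes
     * ciphertext to the real socket.  Ciphertext is drained from the
     * memory BIO in 16 KB chunks instead of being allowed to pile up. */
    static int encrypt_and_flush(SSL *ssl, BIO *network_bio,
                                 const unsigned char *data, int len,
                                 int (*send_to_socket)(const char *, int))
    {
        char out[16 * 1024];
        int done = 0;

        while (done < len) {
            int rc = SSL_write(ssl, data + done, len - done);
            if (rc > 0)
                done += rc;
            else if (SSL_get_error(ssl, rc) != SSL_ERROR_WANT_WRITE)
                return -1;                       /* real error, give up */

            /* Drain whatever ciphertext has accumulated so far. */
            while (BIO_ctrl_pending(network_bio) > 0) {
                int got = BIO_read(network_bio, out, sizeof(out));
                if (got > 0 && send_to_socket(out, got) <= 0)
                    return -1;                   /* socket write failed */
            }
        }
        return done;
    }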

-Martin

