This has of course come up before - there was an energetic discussion on this 
list back in May 2001, and then again in August of that year. Eric Rescorla was 
one of the participants (as was I).

And the answer has always been that given the minuscule performance gain,[1] 
and the portability issues for platforms that lack scatter/gather I/O, no one 
has been motivated to implement it. The OpenSSL core team has better things to 
do; and clearly no one else has found it sufficiently rewarding to implement 
it, submit a pull request, and advocate for its inclusion in the official 
distribution.

OpenSSL is open source, after all. There's nothing stopping anyone from adding 
SSL_writev to their own fork, testing the result, and submitting a PR.
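
For what it's worth, the core of such a wrapper is just gathering the iovec 
array into one contiguous buffer and making a single underlying write call, so 
the whole payload lands in one TLS record. Here is a minimal sketch of that 
gather step; the names (ssl_writev_sketch, write_fn, capture_write) are my own 
illustration, not OpenSSL API, and the write callback stands in for SSL_write:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/uio.h>   /* struct iovec (POSIX) */

/* Callback standing in for SSL_write(ssl, buf, len). */
typedef int (*write_fn)(void *ctx, const void *buf, int len);

/* Copy each iovec segment into the caller-supplied scratch buffer,
 * then issue exactly one write -- one TLS record in the real case. */
int ssl_writev_sketch(void *ctx, write_fn wr,
                      const struct iovec *iov, int iovcnt,
                      unsigned char *scratch, size_t scratch_len)
{
    size_t total = 0;
    for (int i = 0; i < iovcnt; i++) {
        if (total + iov[i].iov_len > scratch_len)
            return -1;                    /* scratch buffer too small */
        memcpy(scratch + total, iov[i].iov_base, iov[i].iov_len);
        total += iov[i].iov_len;
    }
    return wr(ctx, scratch, (int)total);
}

/* Trivial writer for demonstration: copies the output into ctx. */
static int capture_write(void *ctx, const void *buf, int len)
{
    memcpy(ctx, buf, (size_t)len);
    return len;
}
```

The memcpy pass is exactly the "wasted cycles" being debated: it trades one 
extra copy for a single record and a single call into the TLS stack.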

Regarding the "many temporary buffers" problem - traditionally this has been 
solved with a buffer pool, such as the BSD mbuf architecture. A disadvantage of 
a single buffer pool is serialization for obtaining and releasing buffers; that 
can be relieved somewhat by using multiple buffer pools, with threads selecting 
a pool based on e.g. a hash of the thread ID. That gives you multiple, smaller 
lock domains.
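
As an illustration of that multiple-pool idea, here is a small sketch (my own, 
not OpenSSL or BSD code) of N pools each guarded by its own mutex, with the 
pool chosen by hashing the calling thread's ID. The cast of pthread_t to an 
integer is a simplification that holds on platforms such as Linux where 
pthread_t is integral:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define NPOOLS    8     /* number of independent lock domains */
#define POOL_CAP 64     /* free buffers cached per pool */
#define BUFSZ  4096

struct pool {
    pthread_mutex_t lock;
    void *free_list[POOL_CAP];   /* simple stack of free buffers */
    int   nfree;
};

static struct pool pools[NPOOLS];

static void pools_init(void)
{
    for (int i = 0; i < NPOOLS; i++) {
        pthread_mutex_init(&pools[i].lock, NULL);
        pools[i].nfree = 0;
    }
}

/* Pick a pool from the thread ID, so each thread usually contends
 * only with the subset of threads that hash to the same pool. */
static struct pool *my_pool(void)
{
    uintptr_t id = (uintptr_t)pthread_self();
    return &pools[(id >> 4) % NPOOLS];
}

void *buf_get(void)
{
    struct pool *p = my_pool();
    void *b = NULL;
    pthread_mutex_lock(&p->lock);
    if (p->nfree > 0)
        b = p->free_list[--p->nfree];
    pthread_mutex_unlock(&p->lock);
    return b ? b : malloc(BUFSZ);    /* pool empty: fall back to malloc */
}

void buf_put(void *b)
{
    struct pool *p = my_pool();
    pthread_mutex_lock(&p->lock);
    if (p->nfree < POOL_CAP) {
        p->free_list[p->nfree++] = b;
        b = NULL;
    }
    pthread_mutex_unlock(&p->lock);
    free(b);                         /* pool full: free(NULL) is a no-op */
}
```

With one global pool every get/put serializes on one lock; here the expected 
contention per lock drops by roughly a factor of NPOOLS, at the cost of some 
buffers being stranded in lightly used pools.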


[1] Yes, "wasted" cycles are wasted cycles. But by Amdahl's Law, optimizing a 
part of the system whose cost is dominated by other factors two or more orders 
of magnitude larger can never gain you even a single percentage point of 
overall improvement. Is it really that useful to improve your application's 
capacity from, say, 100,000 clients to 100,100? What's the value of that 
relative to the cost of implementing and testing a new API?
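
To make the arithmetic behind that footnote concrete: if the copying being 
optimized is a fraction p of total cost and is sped up by a factor s, Amdahl's 
Law gives an overall speedup of 1 / ((1 - p) + p/s). The 0.1% figure below is 
an illustrative assumption matching the 100,000-to-100,100 example, not a 
measurement:

```c
#include <assert.h>

/* Amdahl's Law: overall speedup from accelerating a fraction p of the
 * work by a factor s. */
double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}
```

Even eliminating the optimized part entirely (s -> infinity) with p = 0.001 
yields a speedup of about 1.001, i.e. roughly 100 extra clients on a base of 
100,000.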

--
Michael Wojcik
Distinguished Engineer, Micro Focus

