On Mon, Jan 20, 2014 at 09:18:02AM -0800, James Peach wrote:
> On Jan 20, 2014, at 8:27 AM, Alex Garzão <alex.gar...@azion.com> wrote:
> 
> > Hello,
> > 
> > Are there any plans to support TCP Fast Open (TFO) in ATS?
> 
> I'm not aware of anyone currently working on this.
> 
> On my reading of the kernel docs, you can enable TFO on all listen sockets 
> automatically; that should let you experiment without any ATS changes. If you 
> want ATS to set the TCP_FASTOPEN socket option, that's a pretty easy change; 
> you'd want to plumb a configuration variable to enable that though. Could you 
> file a ticket at https://issues.apache.org/jira/browse/TS?
> 
> J

I was meaning to say something about this the other day, but I was 
sleep-deprived, and then didn't get around to it. So I'll give a brain dump now 
rather than stay quiet.

I've already played around with TCP Fast Open a bit, though not in TrafficServer. 
I got as far as modifying wget and my own microcurl implementation to send TCP 
Fast Open requests, and modifying the lighttpd web server to accept them.

I ran into an issue with kernel headers lacking the TCP Fast Open definitions, 
and had to kludge them in manually.  I also checked around a bit, and couldn't 
find any servers supporting TCP Fast Open other than a few Google ones, 
www.google.com and www.gmail.com being prime examples.  Chrome/Chromium are 
existing clients, but obviously not easy to do controlled testing from.

When using Google Chrome the difference in performance is quite obvious, even 
just viewing Cacti graphs remotely.  But sometimes requesting files seemed no 
faster - I suspect there could be some kind of traffic shaping that expects 
bandwidth usage to ramp up slowly rather than the initial response to arrive as 
a burst of packets - but I didn't look into it further, and it didn't seem to 
happen all the time.

I didn't come across anything about enabling TCP Fast Open on all listen sockets 
automatically, but that sounds interesting.  From what I gather most problems 
are dictated by the client, the primary concern being that non-idempotent 
PUT/POST requests could be delivered twice if the data carried in the SYN is 
replayed.  Last I checked, Chrome hadn't yet been fixed to avoid TCP Fast Open 
for such requests, and also hadn't enabled TCP Fast Open for explicit proxies.
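
For reference - this is from my reading of the kernel's ip-sysctl documentation, 
not anything tested against ATS - the kernel-wide switch mentioned above is the 
net.ipv4.tcp_fastopen sysctl, a bitmask; recent kernels also have a bit that 
force-enables TFO on every listener without any setsockopt call:

```
# sysctl fragment; bit meanings from Documentation/networking/ip-sysctl.txt
#   0x1   enable client-side TFO (sendto with MSG_FASTOPEN)
#   0x2   enable server-side TFO for listeners that set TCP_FASTOPEN
#   0x400 enable TFO on all listeners regardless of the socket option
net.ipv4.tcp_fastopen = 3        # client + opted-in servers
```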

Unfortunately my current development environment uses an explicit proxy over a 
local area network connection, which won't benefit much from server-side TCP 
Fast Open; testing transparent proxying would mean kludging the data onto 
another path, to little advantage other than confirming that nothing breaks.  So 
I'm more interested in getting TCP Fast Open working for initiating connections 
rather than for accepting them.

Which brings up the other problem: you have to make sure the data path can 
support initiating a connection whilst already having the data of the request to 
send, since TCP Fast Open carries the first data in the SYN.  I haven't looked 
into TrafficServer deeply enough to see how hard this is, but I jumped from curl 
to wget as a test client, as wget's code made it easier.

The listening part is basically simple: an optimisation can be made to check 
whether the request data is already available when the TCP connection is 
accepted, but even without that, an extra trip around the select/poll loop will 
still perform better than no TCP Fast Open support at all.  As for setting the 
socket option, I'd like to see a per-socket argument that just tries to enable 
it, with a fallback path if the kernel refuses.

It'd also be useful to see support in http_load.  I suppose starting with a 
global enable for TCP Fast Open, and then adding support to http_load, would be 
a good starting point.

Ben.
