Hi Shrinand,

In HTTP/1.0, the client is required to add an extra header to the request so that the server keeps the connection open for reuse by multiple HTTP requests:

    Connection: Keep-Alive

In HTTP/1.1, all connections are considered persistent unless declared otherwise, so a separate keep-alive header is not needed. I believe Swift uses the HTTP/1.1 protocol. However, the HTTP server has default timeout settings.
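Because HTTP/1.1 connections are persistent by default, several requests can share a single TCP socket, which avoids leaving one TIME_WAIT entry per request. A minimal, self-contained sketch using Python's stdlib (the local echo server and the paths are placeholders for illustration, not Swift code):

```python
# Sketch of HTTP/1.1 persistent connections with Python's stdlib; the
# throwaway local server stands in for a Swift node, nothing more.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # HTTP/1.1 responses are persistent by default, so the server keeps
    # the TCP connection open between requests (the HTTP/1.0 default in
    # BaseHTTPRequestHandler would close it after every response).
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port, timeout=10)
results = []
for path in ("/obj1", "/obj2"):
    conn.request("GET", path)  # both requests reuse one TCP connection
    resp = conn.getresponse()
    results.append((resp.status, resp.read().decode()))
conn.close()
server.shutdown()
print(results)  # [(200, '/obj1'), (200, '/obj2')]
```

If the handler's protocol_version were left at its HTTP/1.0 default, the server would close the socket after each response and the client would open a fresh connection per request, which is exactly the per-request socket churn described in the question below.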
In the Swift proxy server, the default connection timeouts are controlled in the config file as shown below. You can look deeper into the code to see how each one is used.

    # node_timeout = 10
    # client_timeout = 60
    # conn_timeout = 0.5
    # post_quorum_timeout = 0.5

I'm also developing a trace tool for Swift here. You can try it and see whether it gives you more insight into this question. After you enable it, you can record the information that interests you into the logs, such as the remote client IP and port of each request. (Currently this information is not recorded by default; you need to add it in the code, or you can tell me what you need and I will update this patch.) Aggregate those logs and see how many ports on each IP/node are used in specific time slots.

This is what I know. I hope this information is useful for you.

-Edward Zhang

On 2014-07-10 at 01:37 AM, Shrinand Javadekar <shrinand@maginatics.com> wrote to "openstack@lists.openstack.org" <openstack@lists.openstack.org>:

Subject: Re: [Openstack] [Swift] Running out of ports or fds?

Any ideas folks?

On Tue, Jul 8, 2014 at 4:26 PM, Shrinand Javadekar <shrinand@maginatics.com> wrote:
> Hi,
>
> I have a question about the HTTP connections made between the various
> Swift server processes, particularly between the Swift proxy server
> and the Swift object server.
>
> I see that these servers do not use a persistent HTTP connection
> between them, so every blob GET/PUT/DELETE request will create a new
> connection, use it, and tear it down. In a highly concurrent
> environment with thousands of such operations happening per second,
> there could be two problems:
>
> i) The time required for creating new connections could hamper performance.
> ii) After the requests complete, many connections will be in the
> TIME_WAIT state, and the proxy server and object server node might
> run out of ports or fds.
>
> If the proxy and object servers are on the same machine, the problem
> is exacerbated.
> I have one such instance and at one point saw ~30K sockets in the
> TIME_WAIT state, though this would have included connections with the
> account server and container server as well.
>
> Does this analysis make sense? If yes, are there ways to do something
> about it (other than asking clients to slow down :P)?
>
> Thanks in advance.
> -Shri

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to    : openstack@lists.openstack.org
Unsubscribe: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack