So Stephen (who presented the perf work at the last hackathon) had an idea about saving some connections by batching up groups of container updates before making the connection to the container server in async_update(). Clearly this would delay updates by some amount, but they aren't immediately processed at the other end anyway until the pending file there hits PENDING_CAP, right? I could easily be missing something big in the logic, but it seems like batching a dozen updates, or waiting a few seconds, as the trigger could save tons of connections. Thoughts?
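For what it's worth, the count-or-timeout trigger I have in mind would look roughly like this. This is just a standalone sketch, not actual Swift code: UpdateBatcher, send_updates, and both thresholds are made-up names and example values, and the real async_update() path would need its own flush-on-shutdown handling.

```python
import threading
import time


class UpdateBatcher:
    """Collect container updates and flush them in one connection when
    either a count threshold or a time threshold is reached.

    send_updates is a stand-in for whatever actually opens the single
    connection to the container server and ships the whole batch.
    """

    def __init__(self, send_updates, max_batch=12, max_wait=3.0):
        self.send_updates = send_updates  # callable taking a list of updates
        self.max_batch = max_batch        # flush after this many updates...
        self.max_wait = max_wait          # ...or this many seconds, whichever first
        self._pending = []
        self._first_queued = None
        self._lock = threading.Lock()

    def queue(self, update):
        with self._lock:
            if not self._pending:
                # Start the clock when the first update of a batch arrives.
                self._first_queued = time.monotonic()
            self._pending.append(update)
            if (len(self._pending) >= self.max_batch or
                    time.monotonic() - self._first_queued >= self.max_wait):
                self._flush()

    def _flush(self):
        if self._pending:
            # One connection for the whole batch instead of one per update.
            self.send_updates(self._pending)
            self._pending = []
```

Note the timeout is only checked when another update arrives; a real version would also want a periodic sweep so a lone update doesn't sit queued indefinitely.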
-Paul

-----Original Message-----
From: John Dickinson [mailto:m...@not.mn]
Sent: Friday, July 11, 2014 11:38 AM
To: Shrinand Javadekar
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] [Swift] Running out of ports or fds?

As Pete mentioned, Swift can use a lot of sockets and fds when the system is under load. Take a look at http://docs.openstack.org/developer/swift/deployment_guide.html#general-system-tuning for some sysctl settings that can help. Also, note that if you start Swift as root (it can drop permissions), it will set the system limits for file descriptors. You may need to use ulimit to increase the number of fds available.

--John

On Jul 11, 2014, at 11:18 AM, Shrinand Javadekar <shrin...@maginatics.com> wrote:

> Thanks for your inputs, Edward and Pete. I'll set sysctl
> net.ipv4.tcp_tw_reuse.
>
> On Fri, Jul 11, 2014 at 8:05 AM, Pete Zaitcev <zait...@redhat.com> wrote:
>> On Tue, 8 Jul 2014 16:26:10 -0700
>> Shrinand Javadekar <shrin...@maginatics.com> wrote:
>>
>>> I see that these servers do not use a persistent http connection
>>> between them. So every blob get/put/delete request will create a new
>>> connection, use it, and tear it down. In a highly concurrent
>>> environment with thousands of such operations happening per second,
>>> there could be two problems:
>>
>> It's a well-known problem in Swift. Operators with proxies driving
>> sufficient traffic for it to manifest set sysctl net.ipv4.tcp_tw_reuse.
>>
>> There were attempts to reuse connections, but they foundered on the
>> complexities of actually implementing a connection cache.
>> Keep in mind that you still have to allow simultaneous connections to
>> the same node for concurrency. It snowballs quickly.
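Putting John's and Pete's suggestions together, the tuning would look something like the below. The exact values are examples only; take the real ones from the deployment guide John linked, and note tcp_tw_reuse only affects outgoing connections from the proxy.

```shell
# Allow sockets in TIME_WAIT to be reused for new outgoing
# connections (the setting Shrinand said he would apply).
sysctl -w net.ipv4.tcp_tw_reuse=1

# Widen the ephemeral port range so thousands of concurrent
# connections don't exhaust local ports (example range).
sysctl -w net.ipv4.ip_local_port_range="1024 61000"

# Raise the per-process fd limit before starting the Swift services,
# per John's note about ulimit (example value).
ulimit -n 65536
```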
>>
>> -- Pete
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack