Hi all,

We have been benchmarking the OpenStack API and found that the performance is not very good. For instance, with 3000 flavors in the database, the "list all flavors" API reaches only about 4 TPS at a concurrency of 10. With 1000 images in Glance, the "list all images" API reaches roughly 3 TPS. The hardware is an Intel(R) Xeon(R) E5640 @ 2.67GHz with 48G of memory.
As we know, the most time-consuming part of the code path is the ORM layer, so an obvious solution would be to cache in nova-conductor, but nova-api does not get its objects from nova-conductor. The easiest approach I can think of is to insert a caching layer into the API, perhaps as a WSGI middleware configured in api-paste.ini for better reusability. We could then insert the cache into any WSGI-pipeline-based service, such as Glance, Neutron, etc.

The cache would be policy based, so different APIs can have different expiries. For example, if flavors do not change often, we can set the expiry of the "list flavors" API to a large value; instances change often, so their expiry might be 1 second. The cache backend could vary: memory based, file based, or a dummy backend. The cache key would be a mixture of the URL, query string, and headers, and caching would only apply to "GET" and "HEAD" requests. A rough sketch is included below.

Does this make sense? Or is there a better solution?
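To make the idea concrete, here is a minimal sketch of such a middleware, assuming an in-memory backend and a simple path-prefix expiry policy. All the names (CacheMiddleware, EXPIRY_POLICY, the example paths) are placeholders I made up for illustration, not existing OpenStack code:

import hashlib
import time

# Hypothetical expiry policy: map a URL path prefix to an expiry in seconds.
EXPIRY_POLICY = {
    '/v2.1/flavors': 300,   # flavors rarely change -> long expiry
    '/v2.1/servers': 1,     # instances change often -> very short expiry
}
DEFAULT_EXPIRY = 0          # 0 means "do not cache"


class CacheMiddleware(object):
    """Cache GET/HEAD responses keyed on URL, query string and headers."""

    def __init__(self, app):
        self.app = app
        self._cache = {}    # key -> (expires_at, status, headers, body)

    def _key(self, environ):
        # Cache key is a mixture of the URL, query string and some headers.
        parts = [environ.get('PATH_INFO', ''),
                 environ.get('QUERY_STRING', ''),
                 environ.get('HTTP_ACCEPT', ''),
                 environ.get('HTTP_X_AUTH_TOKEN', '')]
        return hashlib.sha1('\0'.join(parts).encode('utf-8')).hexdigest()

    def _expiry(self, path):
        for prefix, seconds in EXPIRY_POLICY.items():
            if path.startswith(prefix):
                return seconds
        return DEFAULT_EXPIRY

    def __call__(self, environ, start_response):
        # Only cache safe, idempotent requests.
        if environ.get('REQUEST_METHOD') not in ('GET', 'HEAD'):
            return self.app(environ, start_response)

        expiry = self._expiry(environ.get('PATH_INFO', ''))
        if expiry <= 0:
            return self.app(environ, start_response)

        key = self._key(environ)
        cached = self._cache.get(key)
        if cached and cached[0] > time.time():
            _, status, headers, body = cached
            start_response(status, headers)
            return [body]

        # Cache miss: call the wrapped app and capture its response.
        captured = {}

        def capture_start_response(status, headers, exc_info=None):
            captured['status'] = status
            captured['headers'] = headers
            return start_response(status, headers, exc_info)

        body = b''.join(self.app(environ, capture_start_response))
        if captured.get('status', '').startswith('200'):
            self._cache[key] = (time.time() + expiry,
                                captured['status'],
                                captured['headers'],
                                body)
        return [body]


def filter_factory(global_conf, **local_conf):
    # Standard paste filter factory so this can be wired into api-paste.ini.
    def _factory(app):
        return CacheMiddleware(app)
    return _factory

It could then be added to the desired pipeline in api-paste.ini with a filter section along the lines of (module/filter names again made up):

[filter:response_cache]
paste.filter_factory = cache_middleware:filter_factory

A real implementation would of course need invalidation on writes, a shared backend across API workers, and care with per-tenant/per-token responses, but hopefully this shows the shape of what I have in mind.

--
Best regards,
TT Gao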