Hi guys,

We are currently benchmarking our Scality object-server backend for Swift. We 
basically created a new DiskFile class that is used by a new ObjectController 
inheriting from the native server.ObjectController. It's pretty similar to 
how Ceph can be used as a backend for Swift objects. Our DiskFile makes HTTP 
requests to the Scality "Ring", which supports GET/PUT/DELETE on objects.

The Scality implementation is here:
https://github.com/scality/ScalitySproxydSwift/blob/master/swift/obj/scality_sproxyd_diskfile.py
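
To make the setup concrete, here is a very simplified sketch of the structure 
(the class and method names follow Swift's DiskFile API as we understand it; 
the sproxyd client helper and its put_object() call are made up for the 
example, the real code is in the link above):

    import contextlib

    from swift.obj import server


    class ScalityDiskFileWriter(object):
        """Collects the chunks handed over by the object server and
        pushes the whole object to the Ring in put()."""

        def __init__(self, sproxyd_client, name):
            self._client = sproxyd_client
            self._name = name
            self._chunks = []

        def write(self, chunk):
            self._chunks.append(chunk)
            return sum(len(c) for c in self._chunks)

        def put(self, metadata):
            # One HTTP PUT to sproxyd; on large objects this is the call
            # that takes time to complete.
            self._client.put_object(self._name, b''.join(self._chunks),
                                    metadata)


    class ScalityDiskFile(object):
        """Maps Swift's DiskFile API onto HTTP calls against the Ring."""

        def __init__(self, sproxyd_client, account, container, obj):
            self._client = sproxyd_client
            self._name = '/'.join((account, container, obj))

        @contextlib.contextmanager
        def create(self, size=None):
            yield ScalityDiskFileWriter(self._client, self._name)


    class ScalityObjectController(server.ObjectController):
        """Object server that hands out our DiskFile instead of the
        native, filesystem-based one."""

        def get_diskfile(self, device, partition, account, container, obj,
                         **kwargs):
            # self._sproxyd_client would be created at startup, not shown.
            return ScalityDiskFile(self._sproxyd_client, account, container,
                                   obj)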

We are using ssbench to benchmark, and when the concurrency is high we see 
what look like interleaved operations on the same object. For example, our 
DiskFile is asked to DELETE an object while that object is still being PUT by 
another client. The Scality Ring doesn't support multiple writers on the same 
object, so a lot of ssbench operations fail with an HTTP '423 - Object is 
locked' response.

We dove into the ssbench code and saw that it should not issue interleaved 
operations. By adding some logging in our DiskFile class, we suspect that the 
object server doesn't wait for the put() method of the DiskFileWriter to 
finish before returning HTTP 200 to the Swift proxy. Is this explanation 
correct? Our put() method in the DiskFileWriter can take some time to 
complete, which would explain why the PUT on the object is still being 
finalized when a DELETE arrives.
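
For reference, the logging we added is roughly the following (names are ours, 
just to show the idea): a small decorator around the DiskFileWriter methods 
that records when put() starts and how long it runs, so we can compare that 
with when the proxy gets its 2xx back.

    import functools
    import logging
    import time

    LOG = logging.getLogger(__name__)


    def log_duration(method):
        """Log when a DiskFileWriter method starts and how long it takes.

        Assumes the wrapped method lives on an object exposing a _name
        attribute (our writer does).
        """

        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            start = time.time()
            LOG.debug('%s start on %s', method.__name__, self._name)
            try:
                return method(self, *args, **kwargs)
            finally:
                LOG.debug('%s done on %s after %.3fs',
                          method.__name__, self._name, time.time() - start)
        return wrapper

With put() (and the delete path in the DiskFile) decorated this way, the 
timestamps are what led us to the suspicion above.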

Some questions:
1) Is it possible that the put() method of the DiskFileWriter is somehow 
non-blocking? (or that the result of put() is not awaited?) If not, how could 
ssbench think that an object has been completely PUT and that it is allowed 
to delete it?
2) If someone could explain to me in a few words (or more :)) how Swift deals 
with multiple writers on the same object, that would be very much appreciated.

Thanks a lot,
Jordan


