I added some code to my system to test writing data into Riak. I'm using the Python client library with protocol buffers. I'm writing a snapshot of my current data: one JSON object containing on average 60 individual JSON sub-objects, each with about 22 values.
    # Archive entry. ts is a formatted timestamp.
    entry = self._bucket.new(ts, data=data)
    entry.store()

    # Now write the current entry.
    entry = self._bucket.new("current", data=data)
    entry.store()

I'm writing the same data twice: the archived copy and the current copy, which I can easily retrieve later.

Performance is lower than expected; top shows a constant CPU usage of 10-12%. I haven't committed to Riak yet; this test is to help me decide. But for now, are there any optimisations I can make here?

A similar test with MongoDB shows a steady CPU usage of 1%. The CPU usages are for my client, not Riak's own processes. The only difference between my test apps is the code that writes the data to the database; all other code is 100% the same between them.

Any suggestions appreciated. Thanks

Mike

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
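P.S. To separate per-call client overhead from what top reports, I can time the store calls directly. A minimal sketch with the standard library; `timed_store` and the no-op stand-in are my own hypothetical helpers, not part of the Riak client, and the lambda would be replaced by the real bucket write:

    import time

    def timed_store(store_fn, n=100):
        # Call the zero-argument write wrapper n times and
        # return the average latency per call, in seconds.
        start = time.perf_counter()
        for _ in range(n):
            store_fn()
        return (time.perf_counter() - start) / n

    # Stand-in for the real write, e.g.
    # lambda: self._bucket.new("current", data=data).store()
    avg = timed_store(lambda: None, n=1000)
    print("average latency: %.1f us" % (avg * 1e6))

Comparing that number between the Riak and MongoDB test apps should show whether the extra CPU is spent inside the client library.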