Re: nodes with 100% HD usage

2015-04-13 Thread Alex De la rosa
Awesome! Thanks Bryan, that's exactly what I wanted to know. Monitoring that all nodes stay below 80% of capacity, and adding nodes when they reach that limit to rebalance data and free space on those nodes, seems the right way to go then :) Thanks, Alex On Mon, Apr 13, 2015 at 1:16 PM, bryan hunt wr
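The 80% rule discussed above can be sketched as a simple capacity check. This is an illustrative helper, not part of any Riak tooling: `needs_rebalance`, the node names, and the byte figures are all made up; real values would come from OS-level monitoring of each node's data partition.

```python
def needs_rebalance(used_bytes, total_bytes, threshold=0.80):
    """Return True when a node's disk usage crosses the threshold."""
    return used_bytes / total_bytes >= threshold

# Hypothetical per-node (used, total) figures in bytes.
nodes = {"riak1": (850, 1000), "riak2": (420, 1000)}
over = [name for name, (used, total) in nodes.items()
        if needs_rebalance(used, total)]
print(over)  # -> ['riak1']
```

In practice such a check would feed an alerting system, so new nodes can be joined and the ring rebalanced well before any node approaches 100% usage.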

Re: object sizes

2015-04-13 Thread Alex De la rosa
Hi Bryan, Thanks for your answer; I don't know how to code in Erlang, so all my system relies on Python. Following Ciprian's curl suggestion, I tried to compare it with this Python code over the weekend: Map object: curl -I > 1058 bytes print sys.getsizeof(obj.value) > 3352 bytes Standard obj
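One likely source of the mismatch Alex reports (1058 bytes from `curl -I` vs 3352 from `sys.getsizeof`) is that `sys.getsizeof` measures the whole CPython object, headers included, rather than the payload alone; `len()` is the closer analogue of the HTTP `Content-Length` header. That does not account for the full gap here (for CRDT maps the serialization format itself adds overhead, as Bryan notes in his reply), but it shows the two numbers are not measuring the same thing. A minimal illustration, with the 1058-byte payload invented to mirror the figure in the thread:

```python
import sys

payload = b"x" * 1058          # stand-in for obj.value fetched from Riak
print(len(payload))            # 1058 -- the figure curl -I would report
print(sys.getsizeof(payload))  # larger: includes CPython bytes-object overhead
```

So when comparing stored sizes against `curl -I`, `len(obj.value)` is the fairer Python-side measurement.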

Re: nodes with 100% HD usage

2015-04-13 Thread bryan hunt
Result - failed writes, reduced AAE availability, system errors, and probably other (OS-level) processes terminating. 100% disk usage is never good. However, our storage systems are write-append, which helps mitigate data corruption. If the node becomes completely unavailable, the other node

Re: object sizes

2015-04-13 Thread bryan hunt
Alex, Maps and Sets are stored just like a regular Riak object, but using a particular data structure and object serialization format. As you have observed, there is an overhead, and you want to monitor the growth of these data structures. It is possible to write a MapReduce map function (in

Java client: ConflictResolver for RiakObject, how to get the key?

2015-04-13 Thread Henning Verbeek
I'm in the process of migrating my code from Riak 1.4 to Riak 2.0. In Riak 2.0, I'm storing binary data as a RiakObject: RiakObject obj = new RiakObject(); obj.setContentType(CONTENT_TYPE); obj.setValue(BinaryValue.create(someByteArray)); StoreValue op = new StoreValue.Builder(obj) .with