Marek,

You're definitely butting up against the limits of Riak's data model here.
It sounds like you are looking for something like Redis, e.g. add element
E to set S.  You could probably use statebox [1] and pull some tricks in
the precommit hook to determine whether the incoming client object is the
initial object or a delta, but you would still need to perform a read in
the hook to get the box, apply the delta, and then return that object from
the hook.  As you said, this happens on the coordinator, and off the top of
my head there is no easy way to guarantee the operation is applied local to
the data.  At least a precommit hook would prevent streaming the object
to/from the client.
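
To make that concrete, here is a rough sketch of what such a hook could
look like.  Treat it as an illustration of the read/apply/return cycle, not
working code: the module name, the {delta, Op} tagging convention, and the
choice to store the box as a term_to_binary'd value are all assumptions on
my part.

%% Sketch only -- module name, the {delta, Op} tag, and the storage format
%% are assumptions, not an existing API.
-module(delta_precommit).
-export([precommit/1]).

precommit(Incoming) ->
    case binary_to_term(riak_object:get_value(Incoming)) of
        {delta, Op} ->
            %% Re-read the current box out of Riak.  Note this is an extra
            %% get issued from the coordinator, not a read local to the
            %% data node.
            {ok, C} = riak:local_client(),
            B = riak_object:bucket(Incoming),
            K = riak_object:key(Incoming),
            Box = case C:get(B, K, 1) of
                      {ok, Old} ->
                          binary_to_term(riak_object:get_value(Old));
                      {error, notfound} ->
                          statebox:new(fun () -> [] end)
                  end,
            %% Op is a statebox op, e.g. {fun ordsets:add_element/2, [E]}.
            NewBox = statebox:modify(Op, Box),
            riak_object:update_value(Incoming, term_to_binary(NewBox));
        _FullBox ->
            %% Initial write: the client sent the whole box, pass it through.
            Incoming
    end.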

I think what you really want is native support for different data models on
top of Riak.  Internally, we've played around with a CRDT [2] interface on
top of Riak where instead of sending objects you send operations.  This is
analogous to how Redis works (although the underlying implementation is
very different given Riak's distributed nature).  The trick with a new data
model is making sure it scales and making sure we understand its
consistency model.
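
To give a feel for the difference, compare the round trips below.  The
first function uses the real Erlang client (riakc); the riak_crdt:update/4
call in the second is purely hypothetical shorthand for shipping an
operation instead of a value, not an API that exists today.

%% Today: read-modify-write the whole value through the KV interface.
add_element_kv(Pid, E) ->
    {ok, Obj0} = riakc_pb_socket:get(Pid, <<"sets">>, <<"s">>),
    Set0 = binary_to_term(riakc_obj:get_value(Obj0)),
    Set1 = ordsets:add_element(E, Set0),
    Obj1 = riakc_obj:update_value(Obj0, term_to_binary(Set1)),
    riakc_pb_socket:put(Pid, Obj1).

%% Hypothetical operation interface: ship only the op, never the value.
add_element_crdt(Pid, E) ->
    riak_crdt:update(Pid, <<"sets">>, <<"s">>, {add_element, E}).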

Riak's pedigree is the key-value model.  I'm sure you could hack something
together on top of Riak's KV model with precommit hooks, but you will be
swimming upstream.  That said, we're always looking at new models that
would complement our existing key-value model.

-Ryan

[1]: https://github.com/mochi/statebox

[2]: http://hal.archives-ouvertes.fr/inria-00555588/

On Wed, Jan 4, 2012 at 10:11 AM, Marek Zawirski <marek.zawir...@lip6.fr> wrote:

> Hi,
>
> we are trying to use Riak as a storage layer for experimental
> higher-level data types updated by clients, using a set of
> well-defined operations. To this end, each data type instance is
> stored under a single key. One problem with this approach is that
> after a client modifies even a small piece of the data structure, it
> needs to write (and transfer) the whole data structure back to Riak.
> We are looking for a way to reduce this overhead by sending just a
> delta operation, preferably without partitioning the data structure
> into several keys.
>
> One approach we thought about is to perform operations on the Riak
> side using pre-commit hooks or a similar technique, i.e. reconstruct
> the new value on Riak from the original old value plus the delta sent
> by the client. The operations (deltas) we are talking about have the
> necessary properties to ensure convergence. Still, there seem to be a
> couple of technical issues involved, and we are looking into how to
> solve them:
> 1) The pre-commit API seems to offer access only to the object value
> passed by the client during the write, not the old value; am I able
> to read the old value from the store in a pre-commit hook? In
> particular, the value previously read by the writing client,
> identified by its version vector?
> 2) Pre-commit hooks are executed on the coordinator node; is there an
> easy way in Riak to apply operations at the data nodes instead?
>
> Thanks for any info that can help address these issues.
>
> Regards,
> Marek Zawirski
>