Re: Indexing of intermediate nested fields in Riak search

2011-10-30 Thread Elias Levy
On Sat, Oct 29, 2011 at 9:59 PM, Elias Levy wrote: > I am wondering if Riak search can index intermediate nested fields. When indexing JSON data through the KV precommit hook, the underscore is understood in the schema as indicating nesting. Thus, foo_bar will index the value "bah" of fiel
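
A minimal sketch of the flattening convention being described, using the old Python client (the host, bucket, and key here are illustrative assumptions, not from the original message):

import riak

# Assumed connection over the default HTTP interface.
client = riak.RiakClient(host='127.0.0.1', port=8098)

# Assumed bucket with the Riak Search precommit hook installed.
bucket = client.bucket('indexed')

# With the search hook, nested JSON fields are flattened using underscores,
# so this document is searchable under the field name foo_bar with value "bah".
# The question above is whether the intermediate field foo can also be indexed.
bucket.new('example-key', data={'foo': {'bar': 'bah'}}).store()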

Re: atomically updating multiple keys

2011-10-30 Thread Alexander Sicular
Greetings Justin, IMHO, AFAIK, IANAL, etc. "is it ok?" really boils down to whatever you're ok with, can program, can understand, is within your budget, can implement and/or do all of the above within your own timeline. I think it is always true that the number of opinions you will get is more t

Re: atomically updating multiple keys

2011-10-30 Thread Les Mikesell
On Sun, Oct 30, 2011 at 10:43 AM, Alexander Sicular wrote: > Greetings Justin, IMHO, AFAIK, IANAL, etc. "is it ok?" really boils down to whatever you're ok with, can program, can understand, is within your budget, can implement and/or do all of the above within your own timeline. I think

Re: atomically updating multiple keys

2011-10-30 Thread Aphyr
One easy way to handle an atomic A->B relationship in an eventually consistent system is to require the existence of both A and B in order to consider the write valid, to write A first, and to use a message queue to retry writes until both A and B exist. There are other approaches for agreement betwee
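
A rough sketch of the read-side half of that idea, i.e. treating A as valid only when the B it references also exists (the bucket names, the b_ref field, and the connection details are made-up assumptions for illustration):

import riak

client = riak.RiakClient(host='127.0.0.1', port=8087,
                         transport_class=riak.RiakPbcTransport)

def fetch_valid_pair(a_key):
    # A is only considered valid if the B it points to can also be fetched.
    a = client.bucket('a').get(a_key, r=2)
    if not a.exists():
        return None
    b = client.bucket('b').get(a.get_data()['b_ref'], r=2)
    if not b.exists():
        return None  # B has not landed yet, so the pair is not (yet) valid
    return a.get_data(), b.get_data()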

Re: Python Client Protocol Buffers Error

2011-10-30 Thread Jim Adler
Attached is the app.config and here's the code snippet:

import riak
client = riak.RiakClient(host='xxx.xxx.xxx.xxx', port=8087, transport_class=riak.RiakPbcTransport)
query = riak.RiakMapReduce(client).add('nodes')
query.add_key_filters(riak.key_filter.tokenize('-', 2) + riak.key_filter.starts_
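
The snippet is cut off by the archive; a sketch of the same pattern written out in full (the starts_with prefix, the map phase, and the host are placeholders, not Jim's actual values) could look like:

import riak

# Connect over protocol buffers (8087 is the default PBC port).
client = riak.RiakClient(host='127.0.0.1', port=8087,
                         transport_class=riak.RiakPbcTransport)

# MapReduce over the 'nodes' bucket, restricted by chained key filters:
# split each key on '-', keep the second token, then match a prefix.
query = riak.RiakMapReduce(client).add('nodes')
query.add_key_filters(riak.key_filter.tokenize('-', 2) +
                      riak.key_filter.starts_with('foo'))  # placeholder prefix

# Trivial JavaScript map phase that just returns the matching keys.
query.map("function(v) { return [v.key]; }")
print query.run()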

Re: atomically updating multiple keys

2011-10-30 Thread Justin Karneges
Yes, this is one I just thought of and was going to ask if it made sense. In order to write A first, you'd need B's key ID to be generated outside of Riak (e.g. a client-generated UUID or snowflake or something). Then you can use a job queue with retries that can simply mash the writing of B unt
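
A compressed sketch of that flow, with a plain retry loop standing in for the job queue (the bucket names, the b_ref field, and the retry policy are illustrative assumptions, not anything from the thread):

import uuid
import riak

client = riak.RiakClient(host='127.0.0.1', port=8087,
                         transport_class=riak.RiakPbcTransport)
a_bucket = client.bucket('a')
b_bucket = client.bucket('b')

# Generate B's key on the client so A can reference it before B exists.
b_key = uuid.uuid4().hex

# Write A first, embedding the reference to B.
a_bucket.new('a-key', data={'b_ref': b_key}).store(w=2)

# Job-queue stand-in: keep retrying the write of B until it succeeds.
for attempt in range(10):
    try:
        b_bucket.new(b_key, data={'payload': 'value'}).store(w=2)
        break
    except Exception:
        pass  # a real job queue would back off and re-enqueue here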

Race condition reading objects

2011-10-30 Thread Elias Levy
I am finding that there appears to be some sort of race condition when reading recently written objects (that is, reading concurrently with the write). I am using Riak 1.0.0 with the leveldb backend through the multi backend in a 3-node cluster. Writes are done with W=2 and reads with R=2. The client is using the riak cl
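
A minimal way to exercise the read-after-write behaviour being described (the host, bucket, and loop count are assumptions; the pattern is simply write with W=2, read back immediately with R=2, and count misses):

import riak

client = riak.RiakClient(host='127.0.0.1', port=8087,
                         transport_class=riak.RiakPbcTransport)
bucket = client.bucket('test')

misses = 0
for i in range(1000):
    key = 'k%d' % i
    bucket.new(key, data={'i': i}).store(w=2)
    if not bucket.get(key, r=2).exists():
        misses += 1  # a read that raced the write and found nothing
print 'missed reads:', misses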