Hi,

I'm experimenting with using key filters to implement indexes. My approach
is, for each data key in bucket A, to create a new empty key in a
dedicated index bucket, with the original key name and the value of the
indexed field encoded in the name of the new index key.

A data key looks like this:

Bucket - riak_perf_test
Key - ccode_<unique_6_digit_ID> : {"redeemed_count": 23 }

For each data key created, an index key is created:

Bucket - idx=redeemed_count=ccode
Key - ccode/23

(in both the data value and the index key name, the 23 varies per key
based on what "redeemed_count" is set to)

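In case the mechanics matter: one way the index keys can be written is as
empty objects over the HTTP interface, something like this (a sketch only;
host/port are the local defaults, and the slash in the key name has to be
URL-encoded):

  # write a placeholder index key; only the key name carries information
  curl -X PUT -d '' -H "Content-Type: text/plain" \
    "http://127.0.0.1:8098/riak/idx=redeemed_count=ccode/ccode%2F23"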

My goal is to be able to run a key-filtered MapReduce job on
idx=redeemed_count=ccode that generates a list of all data key names
with a redeemed_count < 50.

The job (using curl) is here: https://gist.github.com/852451
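
In case the link is inconvenient, the job is roughly this shape (a
simplified sketch rather than the verbatim gist): the key filters tokenize
the index key on "/", convert the second token to an integer, and keep keys
whose value is under 50; the JavaScript map then returns the data-key
portion of each matching index key.

  # POST the MapReduce job to the default HTTP endpoint
  curl -X POST http://127.0.0.1:8098/mapred \
    -H "Content-Type: application/json" \
    --data-binary @- <<'EOF'
  {
    "inputs": {
      "bucket": "idx=redeemed_count=ccode",
      "key_filters": [
        ["tokenize", "/", 2],
        ["string_to_int"],
        ["less_than", 50]
      ]
    },
    "query": [
      {"map": {"language": "javascript",
               "source": "function(v) { return [v.key.split('/')[0]]; }",
               "keep": true}}
    ]
  }
  EOF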

An error shows up almost immediately in sasl-error.log
(https://gist.github.com/852450), but the request doesn't fail right
away on the client side; the only error the client ever sees is an
eventual timeout.

So my questions are: first, what is the error in sasl-error.log telling
me is wrong with my job construction? And second, why does the client
only see a timeout rather than a map_reduce_error, as I've seen for
non-key-filtered jobs?

Thank you in advance for any help. I greatly appreciate it.

-J

