Operate on the data locally, and validate the decryption as a final step, after the re-encrypted value has been written back into the db.
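Roughly what I have in mind, sketched with the Riak Python client. The 'accounts' bucket, the 'secret' field, and the use of Fernet as the cipher are all placeholders; substitute whatever client and crypto code you actually use:

import riak
from cryptography.fernet import Fernet   # stand-in cipher, not a recommendation

client = riak.RiakClient(protocol='pbc', pb_port=8087)
bucket = client.bucket('accounts')        # placeholder bucket name

def rotate_one(key_name, old_key, new_key):
    """Re-encrypt one object locally, then validate after the write."""
    obj = bucket.get(key_name)

    # Decrypt with the old key and re-encrypt with the new one, entirely client-side.
    plaintext = Fernet(old_key).decrypt(obj.data['secret'].encode())
    obj.data['secret'] = Fernet(new_key).encrypt(plaintext).decode()
    obj.store()

    # Final step: read the value back out of Riak and prove it decrypts
    # with the new key before the old key is ever discarded.
    check = bucket.get(key_name)
    assert Fernet(new_key).decrypt(check.data['secret'].encode()) == plaintext

The assert at the end is the "validate as a final step" piece; nothing about the old key gets thrown away until that read-back succeeds.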
Also, you don't have to do it all in one step. Pull a list of keys down, break them up, and test your batch job on a small portion (there's a rough sketch of this below, after the quoted message). If you're concerned about data loss, verify that nothing has been lost before you delete the updated value locally.

The most efficient way to have set up your map would have been a bucket mapping each key name to the things it encrypted. Alternatively, you could have added a keys bucket that uses Links to relate each key name to the objects that were encrypted with it.

Lastly, it seems strange that your concerns about data loss are tied to how you'll be pulling the list of keys that need updating. They really shouldn't be related.

-m

From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf Of Ron Pastore
Sent: Monday, November 11, 2013 9:42 AM
To: riak-users@lists.basho.com
Subject: aggregate query

Hi All,

I posted this question to Stack Overflow a few days back but haven't had much luck. Hoping someone here has some thoughts.

I have a use case for an aggregate query across the entire db and all buckets, and I'm wondering about the best query method to use, leaning towards multiple secondary index calls. This won't be a frequently used feature, possibly invoked once a week or so via a scheduled job or something. Some records have a value in their meta attribute that I'd like to match/target for the selection. After the selection I'll need to update those records.

From what I've read, secondary indexes look great, but a query is limited to a single bucket? I also saw "list buckets", which has warnings about production use, though I'm not sure whether that applies to such infrequently used functionality. I thought maybe I could list buckets and then perform the secondary index query on each. Is there a better way? MapReduce seems heavy, having to load every KV off the file system. Search seems possible too, but the index setup/maintenance feels like overkill if there's an easier way.

UPDATE: I went ahead with a Search index but am now second-guessing that. This lookup will be part of an encryption key rotation, where we'll be finding certain values in Riak that are encrypted with a given key and re-encrypting them with a new key. So, if there are discrepancies or failed operations between the actual encrypted values and the search index, there is a potential for data loss, as we'll be discarding keys once they've been rotated.

Sorry for the long-winded description. Any help would be greatly appreciated.
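The sketch mentioned above, again using the Riak Python client. The 'enc_key_bin' secondary index, the old key id, and the batch size are all placeholders, and it assumes the encryption key's id was written as a 2i entry when each object was stored (if it only lives in the meta attribute today, that index would have to be added first):

import riak

client = riak.RiakClient(protocol='pbc', pb_port=8087)

# "List buckets" is what carries the production warnings (it walks all keys);
# for a once-a-week job it may be tolerable, but a fixed list of bucket names
# is safer if you have one.
buckets = client.get_buckets()

# Hypothetical secondary index: each object is assumed to carry an
# 'enc_key_bin' index whose value is the id of the key that encrypted it.
OLD_KEY_ID = 'key-2013-01'

keys_to_rotate = []
for bucket in buckets:
    for key_name in bucket.get_index('enc_key_bin', OLD_KEY_ID):
        keys_to_rotate.append((bucket.name, key_name))

# Break the list up so the batch job can be tested on a small portion first.
BATCH_SIZE = 100
batches = [keys_to_rotate[i:i + BATCH_SIZE]
           for i in range(0, len(keys_to_rotate), BATCH_SIZE)]

print('%d keys to rotate in %d batches' % (len(keys_to_rotate), len(batches)))
# Run the re-encrypt/validate step from the earlier sketch over batches[0]
# only, check the results, then work through the remaining batches.

Since 2i queries are scoped to a single bucket, the outer loop is what stitches the per-bucket results into a db-wide list; swap client.get_buckets() for a hard-coded list of bucket names if the list-buckets warning is a concern.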