Riak Recap for 8/10 - 8/11

2010-08-12 Thread Mark Phillips
Afternoon, Evening, Morning to all, For today's recap: a great intro blog post on Post Commit Hooks, more Andy Gross, a tweet worthy of many retweets, and some great IRC conversations curated and Gisted for your perusal. Enjoy! Mark Community Manager Basho Technologies wiki.basho.com twitter.co…

Re: to escape or not to escape?

2010-08-12 Thread Bryan Fink
On Wed, Aug 11, 2010 at 3:31 PM, francisco treacy wrote: > I have a question regarding links and URI-escaping in Riak. …snip… > This shows that in the process of escaping/unescaping there are cases > in which link-walking (and map/reduce, for that matter) doesn't work. > Riak should probably never unesc…
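
[Editor's note: a minimal sketch of the double-decoding hazard the thread describes, using Python's urllib; the key value is hypothetical, not taken from the thread.]

    from urllib.parse import quote, unquote

    key = "a%2Fb"                  # hypothetical key that literally contains '%2F'
    sent = quote(key, safe="")     # 'a%252Fb': what a careful client puts in the URL
    print(unquote(sent))           # 'a%2Fb' -- one decode recovers the key
    print(unquote(unquote(sent)))  # 'a/b'   -- a second decode silently corrupts it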

Re: riak deployment

2010-08-12 Thread Alexander Sicular
- You cannot tell Riak where to place buckets.
- You could set the N val on a bucket to one, and you should in the case of your 'big bucket'; otherwise you will get N replicas on the same physical host.
- Use Linode. 512 > 256 = better. But in reality, your use case doesn't mesh well with what ri…
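
[Editor's note: for reference, a bucket's N val is set by PUTting bucket properties over Riak's HTTP interface; a minimal sketch assuming a node at localhost:8098 and a bucket named 'big', both hypothetical.]

    import json
    from urllib.request import Request, urlopen

    # Set n_val=1 on the (hypothetical) bucket 'big' of a local Riak node.
    body = json.dumps({"props": {"n_val": 1}}).encode()
    req = Request("http://localhost:8098/riak/big", data=body,
                  headers={"Content-Type": "application/json"}, method="PUT")
    urlopen(req)  # expect 204 No Content on success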

Re: keys in bitcask

2010-08-12 Thread Alexander Sicular
Is there a new metric/rule of thumb/guide to use when calculating memory requirements? I think before this it was something like 40 bytes + key size per key. -Alexander On Aug 7, 2010, at 11:25 PM, Sean Cribbs wrote: > Dave Smith already reduced memory usage by 40% this past week, simply by > cha…
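
[Editor's note: a back-of-the-envelope sizing using the 40-bytes-plus-key-size rule of thumb quoted above; the key count and average key size are hypothetical.]

    # Rough Bitcask keydir sizing: every key (plus fixed overhead) lives in RAM.
    overhead_per_key = 40          # bytes per key, the figure quoted in the thread
    avg_key_size = 20              # hypothetical average key length in bytes
    num_keys = 100_000_000         # hypothetical key count

    ram_bytes = num_keys * (overhead_per_key + avg_key_size)
    print(ram_bytes / 2**30)       # roughly 5.6 GiB for the keydir alone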

Re: to escape or not to escape?

2010-08-12 Thread Dan Reverri
Hi Francisco, You are correct; Riak is URL decoding Link headers in riak_kv_wm_raw:get_link_heads/2. I've opened bug 617 to address this issue: https://issues.basho.com/show_bug.cgi?id=617 Thanks, Dan Daniel Reverri Developer Advocate Basho Technologies, Inc. d...@basho.com On Wed, Aug 11, 201…

riak deployment

2010-08-12 Thread Orlin Bozhinov
I can easily wait for Riak Search to do this (http://groups.google.com/group/mongodb-user/browse_thread/thread/c2563a8566591a30/b3d19f21675a899e) instead of using MongoDB. Does the deployment I have in mind make sense? I'll get a medium (or large) Linode box for the big dataset bucket. Hopefully…