Re: confused

2010-09-16 Thread Grant Schofield
On Sep 15, 2010, at 2:40 PM, Nils Petersohn wrote: > Hello, > > I was setting up 9 Riak instances: > > three on my Mac with the appropriate app config > and six on two virtual machines on a different computer. > > All 8 joined the d...@192.168.1.20 > and the join request was sent. > > after

Limit on number of buckets

2010-09-16 Thread SKester
Is there a practical (or hard) limit to the number of buckets a riak cluster can handle? One possible data model we could use for one application could result in ~80,000 buckets. Is that a reasonable number? Thanks, Scott

Re: Limit on number of buckets

2010-09-16 Thread Sean Cribbs
Scott, There is no limit on the number of buckets unless you are changing the bucket properties, like the replication factor, allow_mult, or the pre- and post-commit hooks. Buckets that have properties other than the defaults consume space in the ring state. Other than that, they are essentia
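
For context, it is only a change like the one below that moves a bucket into the gossiped ring state; a minimal sketch against the HTTP interface of that era, assuming the default URL and port, with the bucket name and n_val value as placeholders:

  curl -X PUT -H "Content-Type: application/json" \
       -d '{"props":{"n_val":2}}' \
       http://127.0.0.1:8098/riak/mybucket

Buckets that only ever use the default properties never need a request like this, so they add nothing to the ring.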

Re: Limit on number of buckets

2010-09-16 Thread Alexander Sicular
There is no limit to the number of buckets a cluster can handle. The only consideration I know of is when using non-default bucket properties (like bucket-specific N vals), because non-default values are passed around the cluster on the gossip channel. -Alexander @siculars

Re: Limit on number of buckets

2010-09-16 Thread Scott
Thanks for the quick replies, Sean and Alexander. One of our current products allows users to sign up for weather alerts based on their zip code. When we receive a weather alert for a set of locations, we need to quickly find all users in the zip codes affected. We currently do this with a sim
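
One way to read the model being described (an assumption, since the message is cut off) is one bucket per zip code with each subscriber stored under their own key, along these lines, with made-up bucket and key names:

  curl -X PUT -H "Content-Type: text/plain" -d 'subscribed' \
       http://127.0.0.1:8098/riak/zip_10001/user_42
  curl http://127.0.0.1:8098/riak/zip_10001?keys=true

The second request is the per-bucket key listing that the replies below caution about.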

Re: Limit on number of buckets

2010-09-16 Thread Sean Cribbs
Listing keys in a bucket is not necessarily going to be faster than storing the list in an object. You might want to measure this to be sure - be aware that list-keys is bound by the total number of keys in the cluster, not by the number in the bucket. Sean Cribbs Developer Advocate Basho Tec
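
The alternative Sean mentions, keeping the membership list in a single object per zip code, might look like this sketch (bucket, key, and payload are illustrative only):

  curl -X PUT -H "Content-Type: application/json" \
       -d '["user_42","user_97","user_113"]' \
       http://127.0.0.1:8098/riak/zip_subscribers/10001
  curl http://127.0.0.1:8098/riak/zip_subscribers/10001

One GET per affected zip code then replaces a key listing that has to walk every key in the cluster.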

riak not starting properly

2010-09-16 Thread Richard Heycock
Over the last few weeks I've been finding it harder and harder to start riak, which, given that it's running on an auto-provisioned EC2 instance, is a bit of an issue! I can generally restart it by running /etc/init.d/riak restart, but it's got to the stage where I have to run it four or five times. I

Re: confused

2010-09-16 Thread Nils Petersohn
OK, my ring seems OK now. What I did was change the rel/vars/dev[1,2,3]_vars.config files; in there I was just replacing the IPs... this reip thing did not really work out... Here is my riak ring now: (d...@192.168.0.100)1> riak_core_ring_manager:get_my_ring(). {ok,{chstate,'d...@192.168.0.100'
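
For anyone hitting the same problem: as I understand it, riak-admin reip is meant to rewrite a node name in the stored ring files after an address change, run roughly as below against a stopped node; the old and new names here are placeholders:

  riak-admin reip riak@old-ip riak@new-ip

Nils's approach of editing the dev[1,2,3]_vars.config files works around needing it.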

Re: confused

2010-09-16 Thread Grant Schofield
I think the slowness is coming from the older list-keys implementation in 0.12.1; list keys has been changed in the tip version of Riak and is quite a bit faster now. In addition, there have been a lot of improvements to the JavaScript MapReduce implementation that should help the speed of your
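
As a point of reference, a JavaScript MapReduce job of the kind being discussed goes through the /mapred endpoint; a minimal sketch using a built-in map function, with the bucket name as a placeholder:

  curl -X POST -H "Content-Type: application/json" \
       -d '{"inputs":"mybucket","query":[{"map":{"language":"javascript","name":"Riak.mapValuesJson","keep":true}}]}' \
       http://127.0.0.1:8098/mapred

Giving a bare bucket name as the inputs is what triggers the key listing whose slowness Grant describes.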

badarg ets delete

2010-09-16 Thread Michael Colussi
Hey guys, I have an application using Riak 0.12 that does puts, gets, and updates. It works fine, but I get these random error reports in my logs. Any ideas? ERROR <0.149.0> ** Generic server <0.149.0> terminating ** Last message in was stop ** When Server state == {state,139315} ** Reason for ter

Re: badarg ets delete

2010-09-16 Thread Andy Gross
Hi Michael, These errors are almost certainly harmless and being thrown when empty, non-owned vnodes get shut down. It appears that in some cases, the underlying ets table might already be deleted/GC'd by the time BackendModule:stop tries to explicitly delete it. I've opened this bug to track th

Re: Limit on number of buckets

2010-09-16 Thread Alexander Sicular
Hi Scott, Until Riak gains the ability to constrain list traversals by bucket, this will continue to be a point of friction. This issue has been broached before and there are tickets open on the issue tracking site. As I understand it, one solution would potentially modify bitcask to open a 'ca