multi-get
Hi, A while ago, this issue was opened to track supporting multi_get: https://issues.basho.com/show_bug.cgi?id=96 i.e., in a single HTTP GET request you pass a list of 20 keys, and riak returns all 20 values in a single response. I haven't seen any movement on that, so is there a way to "fake" it using map reduce? Thanks, -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
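One way to fake a multi-get today is to POST a bucket/key list to the map/reduce endpoint, as discussed later in this archive. A minimal sketch, assuming a node listening on 127.0.0.1:8098 and hypothetical keys bucket1/k1..k3; the map function simply returns key/value pairs:

    curl -X POST http://127.0.0.1:8098/mapred \
         -H 'Content-Type: application/json' \
         -d '{"inputs":[["bucket1","k1"],["bucket1","k2"],["bucket1","k3"]],
              "query":[{"map":{"language":"javascript",
                               "source":"function(v) { return [[v.key, v.values[0].data]]; }"}}]}'

Note that this still runs as a map/reduce job rather than a plain GET, with the latency caveats discussed in the "GET vs map reduce" thread below.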
Re: Riak Java Client
We use the java client, mostly from groovy. In terms of a wish list: 1: Protocol buffer support; this is addressed in the 0.14 version. 2: Make the httpconnection max client setting work. I couldn't get this to work no matter what I did; it always opens only 1 connection. 3: Support a no-data flag. I noticed that in the python client you can pass a no-data-returned flag when doing a PUT to riak. I can't find the same thing in java. This would be very useful in bulk upload situations. 4: It'd be nice if we could write map/reduce functions in groovy. :) On Wed, Feb 16, 2011 at 10:22 PM, David Smith wrote: > We're also working on putting together a comparison with the Ripple > API (h/t to Sean) and using that to improve/expand the Java API > accordingly. The intent is that all APIs should be idiomatic to > languages, so it won't be a straight port necessarily. > > Short answer -- we're working on it and would love your input. :) -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak Java Client
Excellent, I'll give that a shot. Thanks, On Sun, Feb 20, 2011 at 4:00 PM, Russell Brown wrote: > > When you store using RiakClient.store(RiakObject, RequestMeta) you can set up > RequestMeta with arbitrary query params like > > RequestMeta requestMeta = RequestMeta.writeParams(3, > 3).setQueryParam(Constants.QP_RETURN_BODY, Boolean.toString(true)); > > then > > myClientInstance.store(myRiakObject, requestMeta); > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
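For wish-list item 3 above (suppressing the returned body on a PUT), the same RequestMeta/query-param mechanism shown in this reply should work with the value flipped to false. A minimal, untested sketch using only the classes already mentioned in this thread; myRiakObject and myClientInstance are assumed to exist:

    RequestMeta requestMeta = RequestMeta.writeParams(3, 3)
            .setQueryParam(Constants.QP_RETURN_BODY, Boolean.toString(false)); // don't echo the object back
    myClientInstance.store(myRiakObject, requestMeta);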
tuning riak for 100s of concurrent connections
Are there any guidelines/howtos on tuning riak nodes for 100s (200-500) of concurrent connections which are 99.9% HTTP GET? Thanks, -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
Sadly, the caching solution doesn't work for us. We have too much data, and the access pattern is fairly random. Making cache not a viable option. On Wed, Feb 23, 2011 at 6:38 PM, Ryan Zezeski wrote: > May I also suggest in a heavy GET scenario where the data doesn't change > often or can be slightly stale that you also make use of some sort of shortly > lived, in-memory cache. You could also make use of HTTPs conditional GET > which avoids passing the data over the wire if it hasn't changed. > > -Ryan > > [Sent from my iPhone] > > On Feb 23, 2011, at 3:08 PM, Bob Feldbauer wrote: > >> After importing data into Riak, I'm in a basically 100% GET scenario. I >> haven't done much load testing other than throwing live traffic at it, but >> with that I've seen ~150 concurrent connections. >> >> I'm using Jetty app servers (Riak Java client) which are using protobufs to >> hit an HAProxy server, which load balances my 7 Riak nodes. I really haven't >> done anything special, other than using protobufs and fronting Riak with >> HAProxy. >> >> Other than that, I would just say "use fast disks" and/or add nodes as >> needed to hit your capacity targets -- because with that traffic and 7 nodes >> with 1 1TB disk per node, I'm seeing ~0.5-<1.0 load average on Riak nodes as >> iowait. >> >> Feel free to hop on the #riak IRC channel and I'd be happy to try to answer >> any other questions about our setup. >> >> - Bob Feldbauer >> >>> -- Forwarded message -- >>> From: Wilson MacGyver >>> Date: Tue, Feb 22, 2011 at 7:15 PM >>> Subject: tuning riak for 100s of concurrent connections >>> To: riak-users Users >>> >>> >>> Are there any guidelines/howtos on tuning riak nodes for 100s >>> (200-500) of concurrent connections >>> which are 99.9% HTTP GET? >>> >>> Thanks, >>> >>> -- >>> Omnem crede diem tibi diluxisse supremum. >>> >>> ___ >>> riak-users mailing list >>> riak-users@lists.basho.com >>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com >> >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
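For reference, the conditional GET Ryan mentions works against Riak's HTTP interface via the standard ETag/If-None-Match headers. A minimal sketch using only the JDK; the host, bucket and key names are made up:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalGet {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://127.0.0.1:8098/riak/bucket1/k1");

            // first fetch: remember the ETag Riak sends back
            HttpURLConnection first = (HttpURLConnection) url.openConnection();
            String etag = first.getHeaderField("ETag");
            first.getInputStream().close();

            // later fetch: only transfer the body if the object changed
            HttpURLConnection second = (HttpURLConnection) url.openConnection();
            if (etag != null) {
                second.setRequestProperty("If-None-Match", etag);
            }
            if (second.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                // 304: serve the previously cached copy, nothing came over the wire
            }
        }
    }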
Re: tuning riak for 100s of concurrent connections
We too are using haproxy, though we are using HTTP client instead of protocol buffer. When we developed it, protobuf support didn't exist in java client yet. server load isn't the issue, the issue is any GET seems to be taking around 80ms to return data. On Wed, Feb 23, 2011 at 3:08 PM, Bob Feldbauer wrote: > After importing data into Riak, I'm in a basically 100% GET scenario. I > haven't done much load testing other than throwing live traffic at it, but > with that I've seen ~150 concurrent connections. > > I'm using Jetty app servers (Riak Java client) which are using protobufs to > hit an HAProxy server, which load balances my 7 Riak nodes. I really haven't > done anything special, other than using protobufs and fronting Riak with > HAProxy. > > Other than that, I would just say "use fast disks" and/or add nodes as > needed to hit your capacity targets -- because with that traffic and 7 nodes > with 1 1TB disk per node, I'm seeing ~0.5-<1.0 load average on Riak nodes as > iowait. > > Feel free to hop on the #riak IRC channel and I'd be happy to try to answer > any other questions about our setup. > > - Bob Feldbauer > >> -- Forwarded message -- >> From: Wilson MacGyver >> Date: Tue, Feb 22, 2011 at 7:15 PM >> Subject: tuning riak for 100s of concurrent connections >> To: riak-users Users >> >> >> Are there any guidelines/howtos on tuning riak nodes for 100s >> (200-500) of concurrent connections >> which are 99.9% HTTP GET? >> >> Thanks, >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
did you change the size of the thread pool? on the default setting of 64, I've noticed some high startup time with the GET. On Wed, Feb 23, 2011 at 3:08 PM, Bob Feldbauer wrote: > After importing data into Riak, I'm in a basically 100% GET scenario. I > haven't done much load testing other than throwing live traffic at it, but > with that I've seen ~150 concurrent connections. > > I'm using Jetty app servers (Riak Java client) which are using protobufs to > hit an HAProxy server, which load balances my 7 Riak nodes. I really haven't > done anything special, other than using protobufs and fronting Riak with > HAProxy. > > Other than that, I would just say "use fast disks" and/or add nodes as > needed to hit your capacity targets -- because with that traffic and 7 nodes > with 1 1TB disk per node, I'm seeing ~0.5-<1.0 load average on Riak nodes as > iowait. > > Feel free to hop on the #riak IRC channel and I'd be happy to try to answer > any other questions about our setup. > > - Bob Feldbauer > >> -- Forwarded message -- >> From: Wilson MacGyver >> Date: Tue, Feb 22, 2011 at 7:15 PM >> Subject: tuning riak for 100s of concurrent connections >> To: riak-users Users >> >> >> Are there any guidelines/howtos on tuning riak nodes for 100s >> (200-500) of concurrent connections >> which are 99.9% HTTP GET? >> >> Thanks, >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
We are using EC2 7.5GB 64-bit instances with EBS; all riak data is stored on the EBS volumes. All EC2 instances are in the same zone, communicating over the private IPs, and the 80ms timings we are seeing are for GET requests over the private IP as well. On Fri, Feb 25, 2011 at 9:14 AM, Jeremiah Peschka wrote: > Here's where the infrastructure guy in me kicks in: > What kind of drives are you using in your machines? What kind of NICs do you > have? Are the switches and NICs all forced to full duplex or are they set to > auto-configure? > Theoretically, you should be seeing performance much faster than 80ms on > GETs, or so I'd think. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
TCP_NODELAY is something you set through the socket API on each socket, not a global OS tuning setting, as I recall. On Fri, Feb 25, 2011 at 9:28 AM, Nico Meyer wrote: > Whenever I see latencies which are roughly multiples of 40ms it screams > to me 'nagle algorithm'. I have seen this so often now, that the first > thing I check is, if the TCP_NODELAY option is set on the TCP socket on > both ends. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
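On the client side this is a one-line socket option. A minimal sketch in Java, assuming a Riak node's protocol buffers port on 127.0.0.1:8087 (the address is made up; the option itself is the point):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class NoDelayExample {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket();
            s.setTcpNoDelay(true);                               // disable Nagle on this socket
            s.connect(new InetSocketAddress("127.0.0.1", 8087)); // then connect as usual
            s.close();
        }
    }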
Re: tuning riak for 100s of concurrent connections
another reason to upgrade to 0.14, we are running 0.13. On Fri, Feb 25, 2011 at 11:37 AM, Sean Cribbs wrote: > You can disable Nagle on the riak side (at least on 0.14 and later). Put this > in the riak_core section of app.config: > > {disable_http_nagle, true} > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Feb 25, 2011, at 9:33 AM, Wilson MacGyver wrote: > >> TCP_NODELAY is something you'd set when you use the socket API call, >> not a global tuning setting on the OS as I recall. >> >> On Fri, Feb 25, 2011 at 9:28 AM, Nico Meyer wrote: >>> Whenever I see latencies which are roughly multiples of 40ms it screams >>> to me 'nagle algorithm'. I have seen this so often now, that the first >>> thing I check is, if the TCP_NODELAY option is set on the TCP socket on >>> both ends. >> >> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
I've tried it; it didn't have much impact. A bit more info on how I'm doing the test: I'm using apachebench, and I purposely have it grab the same key/value over and over again. If I use a concurrency of 10 with 1000 requests each, 50% of the requests complete within 7ms and the longest request is 38ms, which is quite good. Now, if I increase the concurrency to 100, 50% of the requests complete at 77ms. It "feels" like a connection-startup-cost problem to me. Is there a way to purposely start riak with a bunch of standby workers, or something to that effect? On Fri, Feb 25, 2011 at 11:37 AM, Sean Cribbs wrote: > You can disable Nagle on the riak side (at least on 0.14 and later). Put this > in the riak_core section of app.config: > > {disable_http_nagle, true} > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Feb 25, 2011, at 9:33 AM, Wilson MacGyver wrote: > >> TCP_NODELAY is something you'd set when you use the socket API call, >> not a global tuning setting on the OS as I recall. >> >> On Fri, Feb 25, 2011 at 9:28 AM, Nico Meyer wrote: >>> Whenever I see latencies which are roughly multiples of 40ms it screams >>> to me 'nagle algorithm'. I have seen this so often now, that the first >>> thing I check is, if the TCP_NODELAY option is set on the TCP socket on >>> both ends. >> -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
It just occurred to me that I should be clearer. When I say 100s of concurrent connections to riak, I mean per node, not per riak cluster; i.e., I don't mean handling 100 connections spread across 6 nodes, I mean enough connections to result in 100 connections per node. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
any advise to solve this? On Fri, Feb 25, 2011 at 1:05 PM, Sean Cribbs wrote: > Yes, the majority of the cost is probably in TCP setup. That pain is > happening at the TCP stack level, not in Erlang. It's actually really cheap > and easy to spawn a new process in Erlang, so there's no reason to have > "standby workers", especially when they don't need to maintain state between > requests. > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Feb 25, 2011, at 12:45 PM, Wilson MacGyver wrote: > >> I've tried it, it didn't have much impact. A bit more info on how I'm >> doing the test. >> >> I'm using apachebench. I purposely have it grab the same key/value over and >> over >> again. >> >> if I use concurrent connection of 10, 1000 requests each. 50% of the >> request complets >> within 7ms, longest request is 38ms. this is quite good. >> >> now, if I increase the concurrent connection to 100, 50% of the >> request complets at 77ms. >> >> it "feels" like a paying for startup cost of connection problem to me. >> Is there a way to purposely >> startup riak with a bunch of standby workers, or something to that effect? >> >> On Fri, Feb 25, 2011 at 11:37 AM, Sean Cribbs wrote: >>> You can disable Nagle on the riak side (at least on 0.14 and later). Put >>> this in the riak_core section of app.config: >>> >>> {disable_http_nagle, true} >>> >>> Sean Cribbs >>> Developer Advocate >>> Basho Technologies, Inc. >>> http://basho.com/ >>> >>> On Feb 25, 2011, at 9:33 AM, Wilson MacGyver wrote: >>> >>>> TCP_NODELAY is something you'd set when you use the socket API call, >>>> not a global tuning setting on the OS as I recall. >>>> >>>> On Fri, Feb 25, 2011 at 9:28 AM, Nico Meyer wrote: >>>>> Whenever I see latencies which are roughly multiples of 40ms it screams >>>>> to me 'nagle algorithm'. I have seen this so often now, that the first >>>>> thing I check is, if the TCP_NODELAY option is set on the TCP socket on >>>>> both ends. >>>> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
Another possibility that comes to mind to avoid the setup cost is to use a connection pool of some sort. Do any of the riak client libs support connection pooling? On Fri, Feb 25, 2011 at 1:25 PM, Sean Cribbs wrote: > Now you're talking about tweaking kernel-level settings -- something well > outside my expertise (although TCP slow-start comes to mind). That said, > it's kind of hard not to use TCP if you want a reliable connection, warts and > all. > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Feb 25, 2011, at 1:06 PM, Wilson MacGyver wrote: > >> any advise to solve this? >> >> On Fri, Feb 25, 2011 at 1:05 PM, Sean Cribbs wrote: >>> Yes, the majority of the cost is probably in TCP setup. That pain is >>> happening at the TCP stack level, not in Erlang. It's actually really >>> cheap and easy to spawn a new process in Erlang, so there's no reason to >>> have "standby workers", especially when they don't need to maintain state >>> between requests. >>> -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
SO_REUSEADDR is also something you set via the socket API, as I recall, so I don't think it's something you can just set globally on the TCP/IP stack. On Fri, Feb 25, 2011 at 1:41 PM, Les Mikesell wrote: > Those settings shouldn't make a big difference in how the number of > connections scale up, though. There is a theoretical maximum rate limit for > creating new connections as each socket is supposed to sit in TIME_WAIT for > a packet round-trip time to ensure that nothing outstanding will collide > with that socket number when it is reused for the same IP address. Maybe > your test is hitting that limit. Can you set SO_REUSEADDR? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
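Like TCP_NODELAY, this is a per-socket option on the client side. A minimal Java sketch (the address is made up; the option has to be set before the socket is bound or connected):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ReuseAddrExample {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket();
            s.setReuseAddress(true);                             // allow rebinding a local port still in TIME_WAIT
            s.connect(new InetSocketAddress("127.0.0.1", 8098));
            s.close();
        }
    }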
Re: tuning riak for 100s of concurrent connections
It's not an actual use case; the actual use case is in fact a fairly random access pattern over the keys. It is, however, a 6-node system with 3 copies, so I'd assume riak can cope with this. I'd also figure the OS-level cache would cache the bitcask chunk pretty quickly. On Fri, Feb 25, 2011 at 2:38 PM, Ryan Zezeski wrote: > > > On Fri, Feb 25, 2011 at 12:45 PM, Wilson MacGyver > wrote: >> >> I purposely have it grab the same key/value over and over >> again. >> > > Could the fact that your getting the same key for each request also be a > factor here? Wouldn't this mean high contention on those vnodes? Is this > an actual use case? > -Ryan > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
Was this over HTTP GET or protocol buffers? And did you by chance run the test using apachebench as well? On Fri, Feb 25, 2011 at 2:43 PM, Nico Meyer wrote: > Just out of curiosity I did some tests myself on one of our production > machines. We normally only use the ProtocolBuffers interface and a > thrift interface that we wrote ourselves. > > If I fetch a small key (~250 bytes), the riak server becomes CPU bound > with about 20 concurrent requests, at which point the latency naturally > becomes larger. At this point one riak server is handling over 6000 > requests/s. This is on an 8 core system (Dual Intel Xeon E5506 > @2.13GHz), and all cores are at nearly 100%. No tuning was done on > either riak or the OS. > > Also 99% of the requests still took less then 20ms, while with 10 > concurrent requests its more like 2ms. > > > On Friday, 25.02.2011, at 13:54 -0500, Wilson MacGyver wrote: >> SO_REUSEADDR is also something you set at the socket API as I recall. >> So I don't think it's something you can just set on the TCP/IP itself >> as a global setting. >> >> On Fri, Feb 25, 2011 at 1:41 PM, Les Mikesell wrote: >> > Those settings shouldn't make a big difference in how the number of >> > connections scale up, though. There is a theoretical maximum rate limit >> > for >> > creating new connections as each socket is supposed to sit in TIME_WAIT for >> > a packet round-trip time to ensure that nothing outstanding will collide >> > with that socket number when it is reused for the same IP address. Maybe >> > your test is hitting that limit. Can you set SO_REUSEADDR? >> >> > > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
Right, which is why I purposely access the same one over and over again, and disabled access to the riak cluster from all other systems. On Fri, Feb 25, 2011 at 3:00 PM, Alexander Sicular wrote: > os level cache caches chunks but shuffles when your access pattern is... > random. caveat lots of factors. > > -Alexander Sicular > > @siculars > > On Feb 25, 2011, at 2:57 PM, Wilson MacGyver wrote: > >> It's not an actual use case. the actual use case is in fact fairly >> random access pattern to >> the keys. >> >> it's however a 6 node system, with 3 copies. I'd assume riak can cope with >> this. >> >> I'd also figure the OS level cache would cache the bitcask chunk pretty >> quickly. >> >> On Fri, Feb 25, 2011 at 2:38 PM, Ryan Zezeski wrote: >>> >>> >>> On Fri, Feb 25, 2011 at 12:45 PM, Wilson MacGyver >>> wrote: >>>> >>>> I purposely have it grab the same key/value over and over >>>> again. >>>> >>> >>> Could the fact that your getting the same key for each request also be a >>> factor here? Wouldn't this mean high contention on those vnodes? Is this >>> an actual use case? >>> -Ryan >>> >> >> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: tuning riak for 100s of concurrent connections
If only we wrote the webapp in haskell :) On Fri, Feb 25, 2011 at 4:35 PM, Bryan O'Sullivan wrote: > On Fri, Feb 25, 2011 at 12:17 PM, Joseph Blomstedt wrote: >> >> For protocol buffers there are client libraries that support >> connection pooling. Of the top of my head I know riak-js does, and [...] > > My Haskell client does, too: https://github.com/mailrank/riak-haskell-client > It also supports request pipelining, which I believe is unique among Riak > client libraries. You can fire off a few thousand requests over the wire and > start receiving results while still sending, and this works for automated > vclock conflict resolution, too. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak Recap for March 14 - 15
On Wed, Mar 16, 2011 at 7:39 PM, Mark Phillips wrote: > On a related note, @wmacgyver mentioned he is also using Riak, Redis > and PostgreSQL: > http://twitter.com/wmacgyver/statuses/48105509718458368 . Out of > curiosity, anyone else using these three DBs together in their stack? I'll expand a bit on this. We still use pgsql for analytics: things that require count ops, join ops, and the like. We have a fairly large dataset, so those things take a while to run. We also use pgsql for things that require transactions. Redis is really being used more as memcached++: it's fast and doesn't lose data on restart. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Java client API
On Mar 23, 2011, at 2:25 PM, Jon Brisbin wrote: > I personally don't like using ambiguous short names. I get that you can > technically distinguish things by package name but using a simple name that > is very common isn't good for code readability. "RiakEntry" is unambiguous, > self-documenting, and semantically correct. "Entry" would, in my mind, > represent a very generic interface that defined various kinds of "Entry"s of > things. > Likewise, I'd like to suggest RiakDoc > This feels like a Grails GORM convention. Using finders and calling save > directly on the object you're working with. I guess it's more a matter of > personal preference, but I'm not sure I like it as a general rule. It's not a > very object-oriented approach, either, as it mixes concerns a little too > much, in my mind. > I like it actually, though not surprising. Since I am using grails with riak. All my standalone scripts are written in groovy but make use of riak java client. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Java client API
Two questions jump out right away. 1: How do you fetch linked records using the API, and how do you set up links on records? 2: It's not obvious from reading the code how to set the r and w values during reads and writes. Is it done using the rw() method? Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
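For comparison, over the plain HTTP interface links are attached with a Link header and followed with the link-walking URL syntax, and r/w are ordinary query parameters. A rough sketch with made-up bucket and key names:

    # attach a link when storing bucket1/keyA, pointing at bucket2/keyB
    curl -X PUT http://127.0.0.1:8098/riak/bucket1/keyA \
         -H 'Content-Type: application/json' \
         -H 'Link: </riak/bucket2/keyB>; riaktag="friend"' \
         -d '{"name":"A"}'

    # follow the link (bucket,tag,keep), and read with an explicit r value
    curl 'http://127.0.0.1:8098/riak/bucket1/keyA/bucket2,friend,1'
    curl 'http://127.0.0.1:8098/riak/bucket1/keyA?r=1'

How the Java API wraps this is exactly the question above, so treat this only as a description of what the client has to send.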
Re: Java client API
Hey, Java users are people too. :) Besides, for the kind of companies that can afford Riak enterprise, what do you think they use? :) While I use Groovy with the riak java client, any java client lib improvement will also benefit all the other JVM languages like Clojure, Scala, etc. On Mar 27, 2011, at 11:40 PM, Alexander Sicular wrote: > Like java, this thread is incredibly verbose and will never die. Discuss ;) ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak Development Environments
In our case, because we use EC2 7.5GB instances with EBS, the macmini setup I tweeted is a good match for the production nodes. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: webcast on riak_core?
The topic I'd really like covered is: what can riak_core do that other stacks, including plain erlang+otp, can't? Basically, what awesome unique sauce does it bring to the table that would make you want to ditch your favorite stack? ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Retrying requests to Riak
Using haproxy to ping each of the riak nodes and remove the nodes that aren't responding is what we did. We use a single haproxy instead of 1 per application server. On Thu, Apr 7, 2011 at 9:47 PM, Greg Nelson wrote: > Hello, > I have a simple three node cluster that I have been using for testing and > benchmarking Riak. Lately I've been simulating various failure scenarios -- > like a node going down, disk going bad, etc. > My application talks to Riak through an haproxy instance running locally on > each application server. It's configured to round-robin over the nodes in > the cluster for both HTTP and PBC interfaces, and uses the HTTP /ping health > check. I assume this is a rather typical setup. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
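For anyone setting this up from scratch, a rough haproxy sketch of the health-checked round-robin described here, using Riak's HTTP /ping endpoint; the listener name, port and server addresses are made up, and the exact directives should be checked against your haproxy version:

    listen riak_http
        bind *:8098
        mode http
        balance roundrobin
        option httpchk GET /ping
        server riak1 10.0.0.1:8098 check inter 2000
        server riak2 10.0.0.2:8098 check inter 2000
        server riak3 10.0.0.3:8098 check inter 2000

Nodes that stop answering /ping are taken out of rotation automatically and put back when they recover.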
Re: Retrying requests to Riak
You can still set each of the haproxy instances to ping each riak node and do the removal automatically. Your app can handle failure by retrying on timeout; since you have it set to round robin, the next retry will hit a working node. On Thu, Apr 7, 2011 at 11:27 PM, Greg Nelson wrote: > I don't want to have a single load balancer because I want to avoid a single > point of failure. And we'll be pushing enough data that it would be a huge > bottleneck. > A failed node will not receive new requests, but when the requests that were > sent to it fail I'd like to retry those automatically instead of having > errors bubble up to our application. > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Random questions
On Mon, Apr 18, 2011 at 6:12 PM, Jon Brisbin wrote: > > If true atomicity is really a concern, then use Redis and write a pub/sub > handler to update your Riak documents whenever things change. > > You know we could probably take the Riak RabbitMQ postcommit hook and adapt > it to use Redis for something along these lines... :) that'd be some insanely crazy R R R combo chain :) -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
replacing a node
In riak, if I want to replace a node when upgrading the hardware, are the following steps correct? 1) tar up riak/data on the old machine; 2) set up riak on the new machine, but don't start it; 3) copy riak/data to the new machine and untar it; 4) run reip (because I couldn't keep the old IP for some reason); 5) stop the old machine via riak stop; 6) start the new machine with riak start. Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
setting r value on a per bucket basis
Hi, I know you can set the n value on a per-bucket basis, and you can set the r value on the client side for reads. Can you set an r value on a per-bucket basis too, so that you don't have to require every client HTTP GET to pass r=somevalue? Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: setting r value on a per bucket basis
Thank you. I tried googling for it, I guess my google-fu is weak :) On Aug 3, 2011, at 1:15 PM, Brett Hoerner wrote: > Yup, see r, w, etc here: http://wiki.basho.com/HTTP-Set-Bucket-Properties.html > > > > On Wed, Aug 3, 2011 at 11:55 AM, Wilson MacGyver wrote: >> Hi, >> >> I know you can set n value on a per bucket basis. >> >> and you can set r value on client side for read. >> >> can you set a r value on a per bucket basis too so that you don't have >> to require >> all client HTTP GET to have to pass r=somevalue? >> >> Thanks >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com >> ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
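Concretely, the bucket-properties call looks something like the following (made-up bucket name and local node; any property not listed in props is left unchanged):

    curl -X PUT http://127.0.0.1:8098/riak/mybucket \
         -H 'Content-Type: application/json' \
         -d '{"props":{"r":2}}'

    # read the properties back to confirm
    curl http://127.0.0.1:8098/riak/mybucket

After that, GETs against the bucket use r=2 unless a request overrides it with an explicit ?r= query parameter.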
Re: replacing a node
oh, you have to reip EVERYWHERE, not just the 1 node? good thing I haven't done it yet :) On Mon, Aug 8, 2011 at 8:46 AM, Sean Cribbs wrote: > Wilson, > Yes, that procedure should work. You'll also probably need to reip on all > machines in order to get the name changed around the ring. > > On Sun, Jul 31, 2011 at 6:59 PM, Wilson MacGyver > wrote: >> >> In riak, if I want to replace a node in case of upgrading the >> hardware, is the following step correct? >> >> tar up riak/data on old machine >> >> setup riak on the new machine, but don't start it >> >> copy riak/data to the new machine and untar it >> >> run reip (because I couldn't keep the old IP for some reason) >> >> stop the old machine via riak stop >> start the new machine with riak start >> >> >> Thanks >> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > > > -- > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://www.basho.com/ > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
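Putting the thread together, a rough command-line sketch of the procedure with the correction above folded in; hostnames and paths are made up, and the exact riak-admin syntax should be checked against your release before trying this on live data:

    # on the old node
    riak stop
    tar czf riak-data.tgz /var/lib/riak            # or wherever the data dir lives
    scp riak-data.tgz newhost:/tmp/

    # on the new node (riak installed, not yet started)
    tar xzf /tmp/riak-data.tgz -C /
    riak-admin reip riak@oldhost riak@newhost      # run while the node is stopped
    riak start

    # per Sean's reply, the same reip also has to be run on every other node in the cluster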
GET vs map reduce
Hi, I've been using map reduce to fake a bulk get. I do a POST against the map reduce interface, using the following: {"inputs": [["bucket1", "k1"], ["bucket1", "k2"], ["bucket1", "k3"], ["bucket1", "k4"], ["bucket1", "k5"], ["bucket1", "k6"], ["bucket1", "k7"], ["bucket1", "k8"], ["bucket1", "k9"] ], "query":[{"map":{"language":"javascript","source":"function(v) { return [v.key, v.values[0].data]; }"}}]} As you can see, it's very straightforward: I'm just passing the list of buckets and keys and returning them. I also set R to 1, and it had been working fine. But lately, as traffic began to increase, we started seeing timeout errors on the map reduce call. The strange thing is, if I issue a GET on each key, the results come back without any problem. Are there any subtle differences between GET and map reduce that I'm not understanding? Thanks, -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: GET vs map reduce
This is certainly a tricky problem for us, because there are times we need to perform 9 to 100 GETs on a per-page-render basis. Some built-in support would certainly be useful. On Fri, Aug 19, 2011 at 10:12 AM, Jacques wrote: > This begs the question, is there much efficiency to gain by creating a true > multi-get? It seems like a number of people are trying to get the most > efficient multi-get possible. > If I remember, it was closed as a wontfix a couple years ago. Did you guys > at Basho find that it just didn't have that much impact on performance? > Thanks, > Jacques > > On Fri, Aug 19, 2011 at 5:45 AM, David Smith wrote: >> >> As Matt and Jacques noted, M/R is really not intended to be used in >> this manner (i.e. for multi-get), particularly if you're interested in >> latency. Generally, you will wind up involving more nodes on a M/R >> request and shipping the "get" results across those nodes as the >> system tries to distribute M/R. >> >> D. > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak in the cloud?
We are using it on Amazon with no issues; we are using 7.5GB instances. You can reduce the cost by using reserved instances. We are still using bitcask, which makes the amount of RAM per instance very important, because each node needs to keep all the keys it's responsible for in RAM. It's a trade-off: is raw metal nice? Sure. However, EC2 gives you advantages compared to your own co-lo datacenter. On Jan 17, 2012, at 3:35 PM, Tom Davies wrote: > Hey there, > > I am considering Riak for a new project. One possible show stopper > for me is I believe Riak is not recommended for use on a VM which > would make cloud deployments (e.g. rackspace, ec2, etc) a no go. Also, > I read somewhere else the minimum EC2 deployment recommendation was 3 > large instances which would run over $700 / month. Is it possible to > run Riak in the cloud and if so what is the minimum recommended > configuration? Also, I'd be interested in hearing any recommendations > for initial configuration in a cloud environment. > > Thanks! > -- > Tom Davies > > http://teenormous.com > The Ultimate T-shirt Search Engine > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
fastest way to load data?
There are various riak clients out there right now, some official, some not. Most are HTTP REST based. As far as I know, the erlang native client is the only protocol-buffers-based version. I remember hearing that the protocol buffers version is about 10x faster than HTTP. So if I want to bulk load a lot of data, is using the erlang client the fastest way? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: fastest way to load data?
Thanks for the info. I saw the java one already, but I noticed it said the API isn't quite complete yet. :) Put me down for a vote on PBC support on official java client. Most of our backend stuff is Java. We use the official client lib with groovy. On Sun, Jul 11, 2010 at 11:54 AM, Sean Cribbs wrote: > Yes, protobuffs is the fastest interface, but you can also use Python (for > which the PBC was originally written), Java, or Ruby. > > Python: http://bitbucket.org/basho/riak-python-client/src > Java: http://github.com/krestenkrab/riak-java-pb-client/ > Ruby: http://github.com/aitrus/riak-pbclient > > More officially supported clients will be getting the PBC option in the near > future. > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Jul 11, 2010, at 11:48 AM, Wilson MacGyver wrote: > >> There are various riak clients out there right now. >> Some official, some not. Most are http REST based. >> >> as far as I know, the erlang native client is the only prototocol buffer >> based version. >> >> I remember hearing that protocol buffer version is about 10x faster >> than HTTP. >> >> so if I want to bulk load a lot of data, is using erlang client the fastest >> way? >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
riak slides?
Hi, I'm going to be giving a talk on riak sometime soon. Does anyone have slides I can steal/borrow? :) Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: riak slides?
Thanks for both speedy responses. The audience is most likely new to NoSQL and has probably never heard of riak. I want to talk about what Riak is and what makes Riak better than the others. I was trying to think of a good way to demo Riak, but I'm not sure it's feasible; I mean, I don't want to lug 3-5 macminis just for the demo, even if they are very tiny. :) Riak is a very good solution, but it doesn't have enough exposure, and I'm hoping to be able to do something to help with that. The event is 1DevDay Detroit http://sites.google.com/site/1devday -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
riak 0.11 building under Erlang R14A
Does the riak 0.11 release src build under Erlang R14A, or do I have to hg pull from master to do that? Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Expected vs Actual Bucket Behavior
When I did this, I thought I was abusing the concept of buckets in riak. :) > a powerful tool for data modeling. For example, sets of 1-to-1 > relationships can be very nicely represented as something like > "bucket1/keyA, bucket2/keyA, bucket3/keyA", which allows related items > to be fetched without any intermediate queries at all. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
UserMeta
I've seen this a few times in the docs and client src examples, but there really isn't anything I can find on why it is useful, or when to use it instead of keeping the data in the value. What is the use case for UserMeta? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
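For context, over the HTTP interface user metadata is just extra X-Riak-Meta-* headers stored alongside the object, so a reader can see small bits of bookkeeping (provenance, content hints, etc.) without parsing the value itself. A small sketch with made-up bucket, key and header names:

    curl -X PUT http://127.0.0.1:8098/riak/users/u123 \
         -H 'Content-Type: application/json' \
         -H 'X-Riak-Meta-Source: import-batch-7' \
         -d '{"name":"someone"}'

    # the metadata header comes back with the object on GET
    curl -v http://127.0.0.1:8098/riak/users/u123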
JSON lib for erlang
At the risk of being somewhat off topic, what JSON lib does everyone use with the Erlang riak client? ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: JSON lib for erlang
Thanks everyone. It seems like it's either mochijson2 or mochijson. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
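For anyone landing here from a search, a minimal mochijson2 round trip looks roughly like this (field names are made up; note that decoded strings come back as binaries):

    %% encode an Erlang term to JSON, then decode it again
    Json = iolist_to_binary(mochijson2:encode({struct, [{<<"name">>, <<"riak">>}, {<<"nodes">>, 3}]})),
    {struct, Props} = mochijson2:decode(Json),
    <<"riak">> = proplists:get_value(<<"name">>, Props).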
check riak bitcask memory usage
Hi, I know one of the side effects of bitcask is that each node needs enough RAM to hold the entire list of keys it's responsible for. Is there a tool/method to check how much is being used, so you can see how close to the limit you are? The alternative is to wait until bitcask blows up, which seems like a bad idea :) -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
keys in bitcask
Is the bitcask key always treated as a string, even if you pass an integer? I'm trying to think of ways to reduce the RAM usage of keys in bitcask. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: keys in bitcask
yea, I saw that. that of course leads me to another question, when this patch comes out as part of 0.13, how will this impact the upgrade? On Sat, Aug 7, 2010 at 11:25 PM, Sean Cribbs wrote: > Dave Smith already reduced memory usage by 40% this past week, simply by > changing the hash table implementation. Rest assured he and Justin are > looking for ways to continue improving on that. > > Sean Cribbs > Developer Advocate > Basho Technologies, Inc. > http://basho.com/ > > On Aug 7, 2010, at 10:39 PM, Wilson MacGyver wrote: > >> Is the bitcask key always treated as a string, even if you pass an interger? >> >> trying to think of ways to reduce the ram usage of keys in bitcask. >> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
couchdb backend?
Looks like someone brought over a couchdb backend as a storage option for riak: http://matt.io/technobabble/The_Key-Value_Wars_of_the_Early_21st_Century/ui -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: check riak bitcask memory usage
Thanks. I'm more interested in just the keys' memory usage; riak-admin status already reports total memory allocated, doesn't it? ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: "Dead" files in bitcask or something
How is the compaction phase defined/activated? I looked through the docs and didn't see it. On Tue, Aug 17, 2010 at 11:54 AM, Alexander Sicular wrote: > Bitcask is a write only log (wol) that eats disk (by keeping all updates) > until a compaction phase that reclaims disk at some defined interval. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
3 nodes with n set to 3
I'm curious. Let's say I only set up 3 riak nodes and I leave n at 3. This means, of course, that all 3 nodes have a copy of the full set of data, due to n being 3. What happens if I then run riak stop on all 3 nodes at the same time? Each node will try to hand off data to the others before going down, but I'm shutting them all down at the same time. Do they give up because they don't have anyone to hand it over to? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
failed to merge?
Hi, this is on a 3-node riak system running stock 0.12. After inserting a bunch of data repeatedly (because I wanted to see when the merge condition would trigger), I saw it happen. But in erlang.log I saw a bunch of "=ERROR REPORT ... Failed to merge" entries, followed by a list of bitcask files with a final status of no_files_to_merge. How does this happen? Does it mean some files in the bitcask are missing? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: failed to merge?
On Sat, Aug 21, 2010 at 10:09 PM, Justin Sheehy wrote: > Hi, Wilson. > All it means is that one merge was scheduled while another was > running, so the first one did all the work and the second had nothing > to do. That makes me feel MUCH better :) -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
can this actually happen?
I have a 3-node test cluster (actually 3 machines) to test the behavior/size requirements of the keys, with n set to the default value of 3. I started blasting large amounts of keys/data into the 3 nodes at the same time. At some point it was too much, and one of the nodes crashed with an out-of-memory error. At this point I ran riak stop on the other 2 nodes that were still alive, at the same time. I looked at memory and disk usage, then restarted all 3 of them. It was then I noticed that some of the keys appear to be missing. Can this actually happen with riak? I'm checking through my data upload logs right now, but I figured I should ask to be on the safe side. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: system_memory_high_watermark
Assuming you are using bitcask: maybe you have too many keys, so the single computer's RAM is being used up by both riak nodes and there isn't enough RAM to hold them? On Tue, Aug 31, 2010 at 12:54 PM, Michael Colussi wrote: > Hey guys, > I've been successfully running component tests where I launch two Riak nodes > on the same computer and join them. > I'm trying to update my Riak version 0.12, but as soon as I join the nodes I > get a system_memory_high_watermark alarm and then both Riak instances crash. > Has anyone else experienced this? Is there a workaround? > And yes, the nodes have different names. > Thanks, > Mike > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak and no of clients limit?
Just use HAproxy. http://haproxy.1wt.eu/ On Sep 4, 2010, at 6:37 PM, OJ Reeves wrote: > The plan was to create a Riak client connection for every request coming in > (maintained for the lifetime of the request). That connection would connect > to any one of the nodes in the Riak cluster (via some simple algo such > round-robin). I was also thinking of building something into this mechanism > which would be able to manage an open list of potential target nodes to > connect to and handle when new nodes join the cluster or when nodes go > offline in an effort to make sure that the web application will "always" know > of a valid node to connect to. > > Does this seem like overkill? Would the effort in implementing such behaviour > be worth the reward? > > Many thanks for any insights. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: bitcask 42?
So based on this, is it fair to say the default value of 64 is only suitable for up to 10 nodes? Thanks, -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Memory footprint
Yes, it's only for the keys that the node is responsible for. On Oct 8, 2010, at 10:11 AM, Tony Novak wrote: > Quick question about Riak's memory footprint: I know that Bitcask > requires all keys to fit in memory. Does this mean only the keys that > reside on a given node, or does each node hold the keys of the entire > system? I assume it's the former, but I just want to be sure. > > Basically what I'm trying to figure out is whether we should expect > better performance from a cluster of N nodes with 2*M GB of memory > each, or 2*N nodes with M GB each. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak in user mode
are you installing it from src? because I run it as a non-root user and it never requested sudo On Mon, Oct 11, 2010 at 10:57 AM, Mojito Sorbet wrote: > My question is not about installing it as root, but running it as root. > The "riak start" script immediately tries to do a sudo. Why is this > necessary? If I could configure it so the data files do not go > into /var, for example. > > It seems to me this is independent from how I built or installed it. > > > On Mon, 2010-10-11 at 10:43 -0400, Alexander Sicular wrote: >> The only part of the Riak 'world' that probably is easiest installed as root >> is erlang. Although you could install erlang as non root. Once you have >> erlang installed, you can compile riak as non root and it works fine. >> >> -alexander >> >> On Oct 11, 2010, at 9:19 AM, Mojito Sorbet wrote: >> >> > If I do not want Riak to run as 'root', do I have to run it from a >> > compiled installation rather than the binary installation? >> > >> > >> > ___ >> > riak-users mailing list >> > riak-users@lists.basho.com >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com >> > > > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
erlang R13 or R14?
With the new riak 0.13 release, as well as riak-search, is erlang R13 still the recommended version for production use, or has it been bumped to R14B? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Upgrade procedure from 0.12 to 0.13
I thought it's possible to do "rolling upgrade". ie, shutdown 1 node, upgrade it, and reconnect the node to the cluster. otherwise you'd need 2X of the nodes in production. On Wed, Oct 13, 2010 at 1:21 PM, Alexander Sicular wrote: > I'm pretty sure that Baho would recommend that all nodes in a cluster should > be running the same version. An upgrade would probably require doing a full > database backup and a full database restore on a *new* cluster. > > -Alexander > > On Oct 13, 2010, at 10:06 AM, SKester wrote: > >> Hey folks, >> >> What is the proper way to upgrade a cluster from 0.12 to 0.13? Our 4 node >> cluster was originally created by installing the 0.12 rpms. Is the process >> as simple as stopping riak on a node, installing the new rpm and >> re-starting? Is it possible to roll through each node one at a time while >> the others are still available, or do all nodes need to be halted and >> upgraded at the same time? I have backed up the /etc/riak directory to save >> the initial config data. Are the 0.12 config files fully compatible with >> 0.13? >> >> Thanks, >> Scott >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > > -Alexander Sicular > > @siculars > > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Upgrade procedure from 0.12 to 0.13
Great, time to buy 10 more macminis :) On Wed, Oct 13, 2010 at 1:34 PM, David Smith wrote: > To the best of my knowledge, there are no changes between 0.12->0.13 > that would prevent a rolling upgrade. That said, we have not tested > that specific use-case yet -- it's in the pipeline to be part of our > pre-release automated testing. > > I'd recommend you'd try it on a QA cluster before doing it live...but > it should generally work. :) > > D. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
riak search and client support
I want to make sure my understanding is correct. riak-search is a superset of riak; as such, any existing riak client lib (the java one, for example) can "PUT" data into riak-search, but only the PHP, Python, Ruby, and Erlang riak clients can execute a "search" from within the client lib. However, you can always just perform the search using the HTTP REST interface, right? We use the java client most of the time for data injection. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
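If the Solr-compatible HTTP endpoint is enabled, the REST search mentioned above looks roughly like this; the index/bucket name, field and host are made up, and the exact parameters supported should be checked against the riak-search docs for your release:

    curl 'http://127.0.0.1:8098/solr/mybucket/select?q=field1:foo&wt=json'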
Re: Data Recovery
You mean beyond the backup and restore commands here? https://wiki.basho.com/display/RIAK/Command-Line+Tools#Command-LineTools-backup On Mon, Oct 25, 2010 at 8:14 PM, Cagdas Tulek wrote: > Hi, > Is there an article or tutorial about how to backup and recover a server? > Best, > Cagdas > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
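For the archives, the usage of those commands is roughly as follows; the node name, Erlang cookie and path are made up, and the exact argument order should be double-checked against riak-admin's usage output for your version:

    # dump every node's data to one file, then restore it later
    riak-admin backup riak@10.0.0.1 riak /backups/riak-all.bak all
    riak-admin restore riak@10.0.0.1 riak /backups/riak-all.bak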
riak-search default_field
Can you have more than 1 field specified for default_field in the search schema? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
riak-search more field types?
So far in the documentation, only string and integer are mentioned as types. How about floats or timestamps? -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: riak-search default_field
Thanks for the info. That means to get multi-field search as the default, we'd have to create some sort of combined field. On Wed, Nov 17, 2010 at 4:07 PM, Dan Reverri wrote: > Only 1 field can be specified for the default_field property. -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: riak-search default_field
were you guys using "copyField" in solr to fill "any"? :) On Wed, Nov 17, 2010 at 5:18 PM, Neville Burnell wrote: > heh, in a previous job working with Solr, my team created the "any" field > > On 18 November 2010 08:49, Wilson MacGyver wrote: >> >> thanks for the info. that means for multiple fields search >> as default. We'd have to create some sort of combined field >> >> >> On Wed, Nov 17, 2010 at 4:07 PM, Dan Reverri wrote: >> > Only 1 field can be specified for the default_field property. >> >> >> -- >> Omnem crede diem tibi diluxisse supremum. >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
total count of keys
Is there a way to get the total number of keys in the entire riak cluster without having to do a map/reduce count? Thanks, -- Season's Greetings! ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: total count of keys
Thanks, another thing to keep on postgres. On Wed, Nov 24, 2010 at 5:15 PM, Alexander Sicular wrote: > Not that I know of. I would imagine you would have to list buckets (erlang > only now, may change shortly) then m/r buckets and sum. > > -Alexander > > On Nov 24, 2010, at 5:12 PM, Wilson MacGyver wrote: > >> is there a way to get the total number of keys in the entire riak >> cluster without having >> to do a map/reduce count? >> >> Thanks, >> >> -- >> Season's Greetings! >> >> ___ >> riak-users mailing list >> riak-users@lists.basho.com >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > > -Alexander Sicular > > @siculars > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
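For completeness, the map/reduce count being avoided above looks roughly like this per bucket; it forces a full key listing, which is why it's expensive on a large cluster. The bucket name and host are made up, and Riak.reduceSum is one of the built-in JavaScript reduce functions:

    curl -X POST http://127.0.0.1:8098/mapred \
         -H 'Content-Type: application/json' \
         -d '{"inputs":"bucket1",
              "query":[{"map":{"language":"javascript","source":"function(v) { return [1]; }"}},
                       {"reduce":{"language":"javascript","name":"Riak.reduceSum"}}]}'

Summing the per-bucket counts (or keeping a counter in postgres, as suggested) gives the cluster-wide total.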
java client concurrent connection
Hi, I'm using the riak java client. According to http://bitbucket.org/jonjlee/riak-java-client/src if you use the following config.setMaxConnections(50); it lets the riak client open 50 concurrent connections. In practice, I'm not seeing that. I set it to 250, and still all I see is 1 inbound connection on the riak node when I use netstat to view the list of inbound connections. Is it working? What am I missing? Thanks -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: java client concurrent connection
Thanks, I'll give that a try. On Mon, Dec 6, 2010 at 7:21 AM, Jan Buchholdt wrote: > The config.setMaxConnections is not connected to the underlying > httpConnectionManager. I guess that is an error in the implementation of the > RiakClient. Try to use something like: > > RiakClient riakClient = new com.basho.riak.client.RiakClient(config); > MultiThreadedHttpConnectionManager cm = (MultiThreadedHttpConnectionManager) > riakClient.getHttpClient().getHttpConnectionManager(); > HttpConnectionManagerParams params = cm.getParams(); > params.setDefaultMaxConnectionsPerHost(50); -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: In riak erlang client, why do we have to surround the parameter with << and >> ?
it's here http://www.erlang.org/doc/reference_manual/expressions.html#id75794 On Thu, Feb 3, 2011 at 9:13 PM, Joshua Partogi wrote: > Hi, > > In riak erlang client, why do we have to surround the parameter with << and >>> ? I can not find what this means in erlang documentation. > > Thanks heaps for your help. > > Kind regards, > Joshua. > > -- > http://twitter.com/jpartogi > > ___ > riak-users mailing list > riak-users@lists.basho.com > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com > > -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
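In practice that means buckets and keys are Erlang binaries. A tiny sketch, assuming a Pid obtained from the protocol buffers client's start_link:

    Bucket = <<"groceries">>,             %% binary literal, written with << >>
    Key    = list_to_binary("mine"),      %% the same kind of term built from a string
    {ok, Obj} = riakc_pb_socket:get(Pid, Bucket, Key).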
riak java client 0.14 changelog?
Hi, I noticed the riak java client has been updated to 0.14, but there is no info beyond the original 0.11 version changelog. Is there any place I can find out what the changes are? Also, has there been any more progress on protocol buffer support yet? Thanks, -- Omnem crede diem tibi diluxisse supremum. ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: riak java client 0.14 changelog?
Thanks, will give it a try ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com