Hi,
I've started to look through the Riak sources, and I've been wondering how
the system behaves in certain failure scenarios.
In particular, it seems to me that it's quite easy to get into a state where the
client thinks a PUT request failed, but the object was in fact written to
storage
and wil
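To make the failure window concrete, here is a toy sketch (invented names, not Riak's actual coordinator code) of a PUT with w=2 that stores one replica and then times out: the client is told the PUT failed, yet the value is on storage.

```python
# Toy model of a quorum PUT (names like coordinate_put and
# acks_before_timeout are made up for illustration).

def coordinate_put(replicas, key, value, w=2, acks_before_timeout=1):
    """Write `value` to replicas in order; raise if fewer than `w`
    acknowledgements arrive before the simulated timeout."""
    acks = 0
    for replica in replicas:
        replica[key] = value            # the write itself still happens
        acks += 1
        if acks >= w:
            return "ok"
        if acks >= acks_before_timeout:
            raise TimeoutError("put timed out with %d of %d acks" % (acks, w))

replicas = [{}, {}, {}]
try:
    coordinate_put(replicas, "k", "v")
except TimeoutError:
    pass                                # client is told the PUT failed...
assert replicas[0]["k"] == "v"          # ...yet one replica stored it
```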
Hi Jordan,
On Wed, Dec 28, 2011 at 12:51 PM, Jordan Schatz wrote:
>
> I am working on bindings for the HTTP API for Lisp. The docs list new and
> old URLs for requests, but it appears that not all of the new URLs are
> implemented yet? I am using 1.0.2; is there a planned release number for
> supp
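For anyone else writing bindings against a 1.0.x node, the two URL schemes look roughly like this (a local node on 8098 is assumed; the old form continued to work alongside the new one):

```
# old scheme
curl http://127.0.0.1:8098/riak/test/mykey

# new scheme (1.0+)
curl http://127.0.0.1:8098/buckets/test/keys/mykey
```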
Hey all,
I'm looking for some best practices in handling connections when using the
protocol buffer client. Specifically, I have 3 nodes in my cluster, and need
to figure out how to handle the situation when one of the nodes is down.
I'm currently using a pooler app (https://github.com/seth/po
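As a rough illustration of the failover logic being discussed (class and function names here are invented, not pooler's API), one approach is to hand out connections round-robin and skip any node whose connect attempt fails, so a single down node doesn't break the client:

```python
# Minimal round-robin failover sketch; `connect` is any callable that
# takes a node and returns a live connection or raises OSError.

class NoNodesAvailable(Exception):
    pass

class FailoverPool:
    def __init__(self, nodes, connect):
        self.nodes = list(nodes)    # e.g. [("10.0.0.1", 8087), ...]
        self.connect = connect      # callable: node -> live connection
        self._next = 0

    def get_connection(self):
        for _ in range(len(self.nodes)):
            node = self.nodes[self._next]
            self._next = (self._next + 1) % len(self.nodes)
            try:
                return self.connect(node)
            except OSError:
                continue            # node is down: try the next one
        raise NoNodesAvailable("all %d nodes refused" % len(self.nodes))

# Usage with a stub connect where node "b" is down:
def connect(node):
    if node == "b":
        raise OSError("connection refused")
    return "conn-" + node

pool = FailoverPool(["a", "b", "c"], connect)
assert pool.get_connection() == "conn-a"
assert pool.get_connection() == "conn-c"   # "b" skipped transparently
```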
Thank you Mark : )
BTW
> > Set bucket:
> > curl -v -X PUT -H "Content-Type: application/json" -d \
> > '{"props":{"n_val":5}}' http://127.0.0.1:8098/buckets/test
> >
>
> curl -v -d 'this is a test' -H "Content-Type: text/plain" \
> http://127.0.0.1:8098/buckets/test/keys
This is needed instead:
On Fri, Dec 30, 2011 at 1:10 PM, Jordan Schatz wrote:
> Thank you Mark : )
>
> BTW
>> > Set bucket:
>> > curl -v -X PUT -H "Content-Type: application/json" -d \
>> > '{"props":{"n_val":5}}' http://127.0.0.1:8098/buckets/test
>> >
>>
>> curl -v -d 'this is a test' -H "Content-Type: text/plain" \
>>
You should look into using HAProxy in front of your nodes. Let HAProxy
load balance across all of your nodes; then if one goes down, HAProxy just
pulls it out of the rotation automatically until it is restored. Then your
pooler can simply pool connections from HAProxy instead
so it d
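For reference, a sketch of an haproxy.cfg stanza for this setup (the hostnames/addresses are placeholders, and 8087 assumes the default Riak protocol buffers port; adjust for your cluster). The `check` keyword is what makes HAProxy drop a failed node from rotation and re-add it once it recovers:

```
listen riak_pb
    bind *:8087
    mode tcp
    balance roundrobin
    server riak1 192.168.1.1:8087 check
    server riak2 192.168.1.2:8087 check
    server riak3 192.168.1.3:8087 check
```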
Great, thanks for the feedback. I'll check out pooly, for sure.
I was thinking about using HAProxy/Zeus (I'm currently using Riak Smartmachines
@ Joyent). I really like this idea; the logic for node failures shouldn't be
in my code. I'll give this a try!
Thanks,
Marc
On Dec 30, 2011, at
I'm confused by LevelDB's file handle usage. I had a cluster of one machine
running happily with two keys (both in the same bucket). I attached a second
node and the first node halted with "too many open files". If I try to restart
the service, it runs for a few seconds and then exits again with
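The usual first thing to check here is the per-process open file limit: LevelDB keeps a file handle per SST file, and the common Unix default of 1024 descriptors is easy to exhaust. A sketch of the fix (the 65536 value is just an example; pick a limit appropriate for your ring size):

```
# check the limit in the shell that starts Riak
ulimit -n

# raise it for the session before starting the node
ulimit -n 65536

# or persistently for the riak user in /etc/security/limits.conf:
riak soft nofile 65536
riak hard nofile 65536
```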
On 30 Dec 2011, at 20:31, Andrew Berman wrote:
>
> Also, shameless plug: I have a connection pooler as well, which has a few
> more options than pooler. You can check it out here: https://github.com/aberman/pooly
Nice, I'll look into that.
> --Andrew
>
> On Fri, Dec 30, 2011 at 9:58 AM, Marc Campbell