You can still set each of the haproxy instances to ping each Riak node and do
the removal automatically.
Your app can handle failure by retrying on timeout; since you have it set
to round robin, the next retry
will hit a working node.
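A minimal haproxy backend along these lines might look like the sketch below. The node names, addresses, and timing values are hypothetical; Riak's HTTP interface does expose a /ping endpoint that works well as a health check:

```
backend riak_http
    balance roundrobin
    # Health-check each Riak node via its HTTP /ping endpoint;
    # nodes that stop responding are pulled from rotation.
    option httpchk GET /ping
    server riak1 10.0.0.1:8098 check inter 2000 rise 2 fall 3
    server riak2 10.0.0.2:8098 check inter 2000 rise 2 fall 3
    server riak3 10.0.0.3:8098 check inter 2000 rise 2 fall 3
```

With `check inter 2000 fall 3`, a dead node stops receiving new requests within a few seconds.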
On Thu, Apr 7, 2011 at 11:27 PM, Greg Nelson wrote:
I don't want to have a single load balancer because I want to avoid a single
point of failure. And we'll be pushing enough data that it would be a huge
bottleneck.
A failed node will not receive new requests, but when the requests that were
sent to it fail, I'd like to retry those automatically.
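A minimal sketch of the retry-through-the-balancer idea. `with_retries` and the request callable are hypothetical names, not part of any Riak client API; the point is that because haproxy round-robins, each retry lands on a different node:

```python
import socket

def with_retries(request, attempts=3):
    """Retry a request on timeout or connection failure.

    With haproxy in round-robin mode, each retry is routed to
    the next node in rotation, so a retry after a node failure
    should reach a healthy node.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return request()
        except (socket.timeout, ConnectionError) as e:
            last_error = e
    raise last_error
```

The application wraps each Riak call in `with_retries` instead of handling failover itself.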
Using haproxy to ping each of the Riak nodes and remove nodes that
aren't responding is
what we did.
We use a single haproxy instead of one per application server.
On Thu, Apr 7, 2011 at 9:47 PM, Greg Nelson wrote:
I don't use haproxy (we use hardware load balancers), but if I did, I
would want to handle it at that layer instead of in the riak client.
However, if you're using the Java client you can implement a custom
HttpClient retry handler to do this:
http://hc.apache.org/httpcomponents-client-ga/tutorial
Hi Gui,
I recently pushed 70 million records of size 1K each into a 5-node
Riak cluster (which was replicating to another 5-node cluster) at
around 1000 writes/second using basho_bench and the REST interface. I
probably could have pushed it further, but I wanted to confirm that it
could maintain
Have you tried the protocol buffers interface? What method are you using to insert
records? I know you said REST, but if it's a big script with curl, that may not
be the best way.
Sent from my iPhone
On Apr 7, 2011, at 5:04 PM, Rexxe wrote:
> What are your w and dw values set to? What backend are you using?
Hello,
I have a simple three node cluster that I have been using for testing and
benchmarking Riak. Lately I've been simulating various failure scenarios --
like a node going down, disk going bad, etc.
My application talks to Riak through an haproxy instance running locally on
each application server.
What are your w and dw values set to? What backend are you using?
On Thu, Apr 7, 2011 at 3:01 PM, Gui Pinto wrote:
Hey guys,
I'm attempting to import 300M+ objects into a Riak cluster, but have
quickly hit the REST API's speed limit at about 200 store() calls per second.
At the rate of 200/s, I'm looking at 20 days to import this data set! That
can't be the fastest method to do this.
Any recommendations?
Thanks
The protocol buffers API operates over a different port. Check the
app.config file of each Riak node for the pb_port parameter to find out what
port each node is listening on. You'll need to update your HAProxy config to
load balance the pb_ports.
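Assuming the default pb_port of 8087 (check app.config to confirm), the extra HAProxy listener might look like the sketch below; names and addresses are hypothetical. Note the protocol buffers interface is plain TCP, not HTTP, so it needs `mode tcp`:

```
# TCP-mode listener for the Riak protocol buffers interface
listen riak_pb
    bind *:8087
    mode tcp
    balance roundrobin
    server riak1 10.0.0.1:8087 check
    server riak2 10.0.0.2:8087 check
    server riak3 10.0.0.3:8087 check
```

The HTTP-level health check (`option httpchk`) doesn't apply here; in TCP mode, `check` falls back to a simple connect test.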
Thanks,
Dan
Daniel Reverri
Developer Advocate
Basho
On Apr 6, 2011, at 10:51 PM, Mike Oxford wrote:
> Skimmed it, killer job. Going to require more time than I have right now but
> I'm excited to get some time to go over it.
>
> Thanks for making this available!
Ditto! :)
jb
>
> -mox
>
> On Wed, Apr 6, 2011 at 8:40 PM, Ryan Zezeski wrote:
Hello Riak folks,
We have a 3-node Riak cluster on my local box with an haproxy load balancer
running on port 9190 for the Riak cluster. I've attached the haproxy conf file
for your reference.
We use riak-python-client to connect to Riak. While connecting through the HTTP
interface, the riak cli