I'm with Kyle on this one. Even better, my 'newhttp' branch on GitHub
enables this kind of multiple-connection handling and automatic failover.
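
Roughly, the client side looks like this (a minimal sketch only, not the
actual 'newhttp' API; it assumes the stock riak-python-client constructor
and a hard-coded host list):

    # Minimal sketch: try each node in a list and fall over to the
    # next one on a connection error.
    import riak

    HOSTS = [("10.0.1.10", 8098), ("10.0.1.11", 8098), ("10.0.1.12", 8098)]

    def with_failover(operation):
        """Run `operation` against the first Riak node that answers."""
        last_error = None
        for host, port in HOSTS:
            try:
                client = riak.RiakClient(host=host, port=port)
                return operation(client)
            except Exception as error:   # connection refused, timeout, ...
                last_error = error
        raise last_error

    # e.g. fetch a key, retrying against the other nodes if one is down
    obj = with_failover(lambda c: c.bucket("users").get("some-key"))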

That branch also has a basic sketch of automatic addition/removal of
Riak nodes as you manipulate your cluster (the monitor.py background
thread). I'll need it one day, but not now, so I haven't finished it
yet.
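
If you want to roll your own in the meantime, the idea is roughly this
(just an illustration, not the unfinished monitor.py; it assumes Riak's
HTTP /stats endpoint exposes "ring_members" and naively pulls host IPs
out of the Erlang node names it returns):

    # A background thread polls a seed node's /stats and rewrites the
    # shared host list from "ring_members" (Erlang node names such as
    # riak@10.0.1.10, so the split on "@" below is a simplification).
    import json
    import threading
    import time
    import urllib2

    def monitor(hosts, seed="http://10.0.1.10:8098", interval=30):
        while True:
            try:
                req = urllib2.Request(seed + "/stats",
                                      headers={"Accept": "application/json"})
                stats = json.load(urllib2.urlopen(req))
                ips = [m.split("@")[1] for m in stats["ring_members"]]
                hosts[:] = [(ip, 8098) for ip in ips]
            except Exception:
                pass   # keep the old list if the seed node is unreachable
            time.sleep(interval)

    hosts = [("10.0.1.10", 8098)]
    t = threading.Thread(target=monitor, args=(hosts,))
    t.daemon = True
    t.start()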

Regarding security: it is the same for options A, B, and C (you're just
shifting pieces around). Put your web servers in one security group and
the Riak nodes in another. Open the Riak ports *only* to the web server
security group and to the Riak group itself.
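
On EC2 that's a few lines with boto (a sketch only; it assumes security
groups named "web" and "riak" already exist and the default Riak ports,
so adjust the handoff and Erlang distribution ranges to match your
app.config):

    # Assumes default ports: 8098 HTTP, 8087 PB, 4369 epmd, 8099 handoff.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")
    groups = dict((g.name, g) for g in conn.get_all_security_groups())
    web, riak = groups["web"], groups["riak"]

    # Client-facing Riak ports: open only to the web tier.
    for port in (8098, 8087):
        riak.authorize(ip_protocol="tcp", from_port=port, to_port=port,
                       src_group=web)

    # Intra-cluster ports (epmd, handoff, Erlang distribution range):
    # open only to the riak group itself.
    for low, high in ((4369, 4369), (8099, 8099), (6000, 7999)):
        riak.authorize(ip_protocol="tcp", from_port=low, to_port=high,
                       src_group=riak)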

Avoiding two services on one machine (e.g. web + Riak) also makes things
much easier to manage and maintain: just have web machines and Riak
machines.

Cheers,
-g

On Tue, Oct 4, 2011 at 17:09, Aphyr <ap...@aphyr.com> wrote:
> Option C: Deploy your web servers with a list of hosts to connect to. Have
> the clients fail over when a riak node goes down. Lower latency without
> sacrificing availability. If you're using protobufs, this may not be as big
> of an issue.
>
> --Kyle
>
> On 10/04/2011 02:04 PM, O'Brien-Strain, Eamonn wrote:
>>
>> I am contemplating two different architectures for deploying Riak nodes
>> and web servers.
>>
>> Option A:  Riak nodes are in their own cluster of dedicated machines
>> behind a load balancer.  Web servers talk to the Riak nodes via the load
>> balancer. (See diagram http://eamonn.org/i/riak-arch-A.png )
>>
>> Option B: Each web server machine also has a Riak node, and there are also
>> some Riak-only machines.  Each web server only talks to its own localhost
>> Riak node. (See diagram http://eamonn.org/i/riak-arch-B.png )
>>
>>
>> All machines will be deployed as elastic cloud instances.  I will want to
>> spin up and spin down instances, particularly the web servers, as demand
>> varies.  Both load balancers are non-sticky.  Web servers are currently
>> talking to Riak via HTTP (though might change that to protocol buffers in
>> the future).  Currently Riak is configured with the default options.
>>
>> Here is my thinking of the comparative advantages:
>>
>> Option A:
>>
>>  - Better for security, because I can lock down the Riak load balancer to
>> only open a single port and only for connections from the web servers.
>>  - Less churn for Riak of nodes entering and leaving the Riak cluster (as
>> web servers spin up and down)
>>  - More flexibility in scaling storage and web tiers independently of each
>> other
>>
>> Option B:
>>
>>  - Faster localhost connection from web server to Riak
>>
>> I think availability is similar for the two options.
>>
>> The web server response time is the primary metric I want to optimize.
>>  Most web server requests will cause several requests to Riak.
>>
>> What other factors should I take into account?  What measurements could I
>> make to help me decide between the architectures?  Are there other
>> architectures I should consider? Should I add memcached? Does anyone have
>> any experiences they could share in deploying such systems?
>>
>> Thanks.
>> __
>> Eamonn
>>
>

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
