Hi Edward,
By "down hard", I assume you mean that the machine is no longer responding
on the cassandra thrift port. That makes sense (and in fact is what I'm
doing currently). But it seems like the real improvement is something that
would allow for a simple monitor that goes beyond the simple...
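For reference, the port-level check being discussed here is easy to script.
A minimal sketch in Python, assuming Cassandra's default thrift port 9160
(node addresses are illustrative); note it only proves the port accepts
connections, not that the node can actually serve requests:

    import socket

    def thrift_port_up(host, port=9160, timeout=2.0):
        """True if the node accepts a TCP connection on the thrift port."""
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.close()
            return True
        except socket.error:
            return False

    if __name__ == "__main__":
        for node in ["10.0.0.1", "10.0.0.2"]:  # illustrative addresses
            print(node, "up" if thrift_port_up(node) else "down")

Anything deeper would have to issue a real client call and check the reply.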
FWIW - we've been using HAProxy in front of a cassandra cluster in
production and haven't run into any problems yet. It sounds like our
cluster is tiny in comparison to Anthony M's cluster, but I just wanted to
mention that others out there are doing the same.
One thing in this thread that I...
On Sun, Aug 29, 2010 at 11:04 AM, Anthony Molinaro wrote:
>
> I don't know, it seems to tax our setup of 39 extra large ec2 nodes. It's
> also closer to 24000 reqs/sec at peak since there are different tables
> (2 tables for each read and 2 for each write)
>
Could you clarify what you mean here?
On Sun, Aug 29, 2010 at 11:04 AM, Anthony Molinaro wrote:
> If one machine is misbehaving, it tends to fail pretty quickly, at which
> point all the haproxies drop it (we have an haproxy on every client node,
> so it acts like a connection pooling mechanism for the client).
Cool. Except this is...
On Sat, Aug 28, 2010 at 02:44:41PM -0700, Benjamin Black wrote:
> On Sat, Aug 28, 2010 at 2:34 PM, Anthony Molinaro wrote:
> > I think maybe he thought you meant to put a layer in between cassandra's
> > internal communication.
>
> No, I took the question to be about client connections.
Sorry, didn't...
On 8/28/10 2:44 PM, Benjamin Black wrote:
> On Sat, Aug 28, 2010 at 2:34 PM, Anthony Molinaro wrote:
>> I think maybe he thought you meant to put a layer in between cassandra's
>> internal communication.
> No, I took the question to be about client connections.
There's no problem balancing client connections...
On Aug 28, 2010, at 12:29 PM, Mark wrote:
> Also, what would be a good way of monitoring the health of the cluster?
We use Ganglia. I believe failover is usually built into clients. Not sure why
using HAProxy or LVS wouldn't be a good option, though. I used to use it with
MySQL slaves with much...
On Sat, Aug 28, 2010 at 2:34 PM, Anthony Molinaro wrote:
> I think maybe he thought you meant to put a layer in between cassandra's
> internal communication.
No, I took the question to be about client connections.
> There's no problem balancing client connections with
> haproxy, we've been pushing several billion requests per month through
> haproxy to cassandra.
munin is the simplest thing. There are numerous JMX stats of interest.
Because Cassandra is a symmetric distributed system, you should not expect
to monitor it like you would a web server. Intelligent clients use
connection pools and react to current node behavior in making choices
of where to send requests.
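A sketch of what polling one of those JMX-backed stats might look like,
shelling out to nodetool from Python. The alert threshold, node addresses,
and the exact tpstats column layout are assumptions (the output format
varies by Cassandra version):

    import re
    import subprocess

    PENDING_THRESHOLD = 100  # illustrative alert threshold

    def total_pending(host):
        """Sum the 'pending' column across thread pools on one node."""
        out = subprocess.check_output(["nodetool", "-h", host, "tpstats"])
        total = 0
        for line in out.decode().splitlines():
            # assumed row shape: PoolName  active  pending  completed
            m = re.match(r"(\S+)\s+(\d+)\s+(\d+)\s+(\d+)", line)
            if m:
                total += int(m.group(3))
        return total

    if __name__ == "__main__":
        for node in ["10.0.0.1", "10.0.0.2"]:  # illustrative addresses
            p = total_pending(node)
            print(node, p, "WARN" if p > PENDING_THRESHOLD else "ok")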
I think maybe he thought you meant to put a layer in between cassandra's
internal communication. There's no problem balancing client connections with
haproxy; we've been pushing several billion requests per month through
haproxy to cassandra.
we use
mode tcp
balance leastconn
server local 127.0.0.1...
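Filling that snippet out, a complete listen block of this shape might look
like the following. This is a sketch, not the actual config from the thread;
the bind address, server names/addresses, and the thrift port 9160 are all
assumptions:

    # client-side haproxy.cfg sketch: clients talk to localhost, haproxy
    # spreads connections across the cluster and drops nodes that fail
    # the TCP connect check
    listen cassandra
        bind 127.0.0.1:9160
        mode tcp
        balance leastconn
        server cass1 10.0.0.1:9160 check
        server cass2 10.0.0.2:9160 check
        server cass3 10.0.0.3:9160 check

The "check" here is exactly the port-level probe discussed earlier in the
thread: haproxy stops sending to a server once the TCP connect fails.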
Because you create a bottleneck at the HAProxy, and because the
presence of the proxy precludes clients from properly backing off from
nodes returning errors. The proper approach is to have clients
maintain connection pools with connections to multiple nodes in the
cluster, and then to spread requests across those connections.
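A minimal sketch of that client-side pattern in plain Python: pick a random
healthy node, and back off from nodes whose connections fail. The node
addresses, thrift port 9160, and 30-second backoff are illustrative; real
clients of this era (pycassa, Hector) build this into their thrift
transports:

    import random
    import socket
    import time

    class NodePool(object):
        """Spread connections across nodes; avoid ones that recently failed."""

        def __init__(self, nodes, port=9160, backoff=30.0):
            self.nodes = list(nodes)
            self.port = port
            self.backoff = backoff   # seconds to avoid a failed node
            self.down_until = {}     # node -> time it may be retried

        def _healthy(self):
            now = time.time()
            return [n for n in self.nodes if self.down_until.get(n, 0) <= now]

        def connect(self, timeout=2.0):
            candidates = self._healthy() or list(self.nodes)  # last resort
            random.shuffle(candidates)
            for node in candidates:
                try:
                    return socket.create_connection((node, self.port), timeout)
                except socket.error:
                    # connection error -> back off from this node for a while
                    self.down_until[node] = time.time() + self.backoff
            raise RuntimeError("no cassandra node reachable")

    if __name__ == "__main__":
        pool = NodePool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
        conn = pool.connect()  # would raise here without real nodes
        conn.close()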
On 8/28/10 11:20 AM, Benjamin Black wrote:
> no and no.
> On Sat, Aug 28, 2010 at 10:28 AM, Mark wrote:
>> I will be load balancing between nodes using HAProxy. Is this recommended?
>> Also, is there some sort of ping/health check URI available?
>> Thanks
Also, what would be a good way of monitoring the health of the cluster?
On 8/28/10 11:20 AM, Benjamin Black wrote:
> no and no.
> On Sat, Aug 28, 2010 at 10:28 AM, Mark wrote:
>> I will be load balancing between nodes using HAProxy. Is this recommended?
>> Also, is there some sort of ping/health check URI available?
>> Thanks
Any reason why load balancing client connections...
no and no.
On Sat, Aug 28, 2010 at 10:28 AM, Mark wrote:
> I will be load balancing between nodes using HAProxy. Is this recommended?
>
> Also, is there some sort of ping/health check URI available?
>
> Thanks
I will be load balancing between nodes using HAProxy. Is this recommended?
Also, is there some sort of ping/health check URI available?
Thanks