On Fri, Jun 4, 2010 at 6:51 PM, wrote:
> I never said it would be frequent. That was an assumption made by Ben.
>
You indicated in an earlier email that you expected half the nodes to
be offline at any time. It is unclear how you expected that to work
for either the consistency processes or the
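To make the objection concrete, the quorum arithmetic can be sketched as follows (a small illustration assuming Cassandra's usual majority quorum of floor(RF/2) + 1; the RF=8 figure matches the scenario discussed later in the thread):

```python
# Sketch: why "half the replicas offline" conflicts with QUORUM operations.
# Assumes majority quorum: quorum = floor(RF / 2) + 1.

def quorum(rf: int) -> int:
    """Number of replicas that must respond for a QUORUM read/write."""
    return rf // 2 + 1

def quorum_possible(rf: int, replicas_online: int) -> bool:
    """Can a QUORUM operation succeed with this many replicas up?"""
    return replicas_online >= quorum(rf)

rf = 8                  # R = N = 8, as proposed below
online = rf // 2        # half the nodes offline -> 4 replicas online
print(quorum(rf))                    # 5
print(quorum_possible(rf, online))   # False: QUORUM reads/writes fail
```

With RF=8, a quorum needs 5 replicas, so with only 4 of 8 nodes up, QUORUM operations cannot complete.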
-----Original Message-----
From: Jonathan Shook
Date: Fri, 4 Jun 2010 20:28:33
To:
Subject: Re: Seeds, autobootstrap nodes, and replication factor
If I may ask, why the need for frequent topology changes?

On Fri, Jun 4, 2010 at 1:21 PM, Benjamin Black wrote:
> On Fri, Jun 4, 2010 at 11:14 AM, Philip Stanhope wrote:
I guess I'm thick ...
What would be the right choice? Our data demands have already been proven to
scale beyond what RDB can handle for our purposes. We are quite pleased with
Cassandra read/write/scale out. Just trying to understand the operational
considerations.
On Jun 4, 2010, at 2:11 PM,
On Fri, Jun 4, 2010 at 11:04 AM, Philip Stanhope wrote:
>
> I am contemplating a situation where there may be 2N servers ... but only N
> online at any one time. But, for operational purposes, N+n (where n is 1 or
> 2), N may be occasionally greater than R.
>
Then Cassandra is probably not the right choice.
Thanks for the correction about Keyspace versus ColumnFamily ... I knew that,
just mis-typed.

I guess it should be stated (to state the obvious) that when you are
auto-bootstrapping a node, the seed had better be alive. The scenario I'm
dealing with is that it might not be (reasons for that are tange
On Fri, Jun 4, 2010 at 10:36 AM, Philip Stanhope wrote:
Here's the scenario: would like R = N where N is the number of nodes. Let's say
8.

1. Create first node, modify storage-conf.xml and change the <Seed> to be the
ip of the node. Change the replication factor to 8 for the CF of interest.
Start the puppy up.

2. Create 2nd node, modify storage-conf.xml and ch
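For step 1, the relevant storage-conf.xml fragments might look roughly like this (a sketch of the 0.6-era format; the keyspace name, CF name, and IP are placeholders, and, per the Keyspace-versus-ColumnFamily correction earlier in the thread, ReplicationFactor is set on the Keyspace rather than the ColumnFamily):

```xml
<!-- Sketch of storage-conf.xml fragments; names and IP are placeholders. -->
<Keyspaces>
  <Keyspace Name="MyKeyspace">
    <ColumnFamily Name="MyCF" CompareWith="BytesType"/>
    <!-- ReplicationFactor lives on the Keyspace, not the ColumnFamily. -->
    <ReplicationFactor>8</ReplicationFactor>
  </Keyspace>
</Keyspaces>

<!-- On the first node, the seed list points at the node's own IP. -->
<Seeds>
  <Seed>10.0.0.1</Seed>
</Seeds>

<!-- First node starts without bootstrapping; later nodes would set this
     to true and list the first node as their seed. -->
<AutoBootstrap>false</AutoBootstrap>
```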