I probably could have saved myself some time by saying (as Peter and Edward
pointed out) "if you use nodes with different capabilities you will need to treat
all nodes as having the lowest spec and that could be a waste." :)
Aaron
On 23 Mar 2011, at 07:26, Peter Schuller wrote:
> Wait! maybe this is a quadruple-whammy since we have to account for
> the data being replicated to other nodes. At replication factor 3 only
> 1/3rd of the data on the node actually belongs in that TokenRange, so
> it is not as simple as having small nodes with smaller ranges; you
> also have to …
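Peter's replication point can be sketched with back-of-envelope arithmetic. The following is illustrative only — a hypothetical 4-node ring with made-up range shares — using the SimpleStrategy rule that each node also stores replicas of the RF-1 ranges immediately preceding it on the ring:

```python
RF = 3  # replication factor

# Hypothetical fraction of the ring owned by each node, in ring order.
# Node 0 is the "wimpy" node with a deliberately tiny primary range.
primary_share = [0.05, 0.35, 0.25, 0.35]

def stored_share(i):
    """Fraction of all data node i stores: its own primary range plus
    replicas of the RF-1 ranges immediately preceding it on the ring."""
    n = len(primary_share)
    return sum(primary_share[(i - k) % n] for k in range(RF))

for i in range(len(primary_share)):
    print(f"node {i}: primary {primary_share[i]:.0%}, stored {stored_share(i):.0%}")
```

Even though node 0 owns only 5% of the ring here, it ends up storing 65% of one copy's worth of data, because replicas of its larger neighbours' ranges land on it. Shrinking a small node's primary range does not shrink its disk load proportionally.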
On Tue, Mar 22, 2011 at 12:23 PM, Peter Schuller
wrote:
> I may be wrong on this, so anyone else feel free to jump in. Here are some
> issues to consider...
>
> - keyspace memory requirements are global, all nodes must have enough memory
> to support the CFs.
> - During node moves, additions or deletions the token range may increase,
> nodes with less …
My assumption is from not seeing anything in the code to explicitly support
nodes of different specs (also think I saw it somewhere ages ago). AFAIK the
dynamic snitch is there to detect nodes with a temporarily reduced throughput
and try to reduce the read load on them.
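For reference, the dynamic snitch is driven by a handful of cassandra.yaml settings (option names as of the 0.7/0.8 era this thread is from — check the default cassandra.yaml shipped with your version):

```yaml
# Wrap the configured snitch with the dynamic snitch, which scores
# replicas by recent read latency and routes reads away from slow nodes.
dynamic_snitch: true
# How often replica scores are recalculated.
dynamic_snitch_update_interval_in_ms: 100
# How often scores are reset, so a recovered node can win traffic back.
dynamic_snitch_reset_interval_in_ms: 600000
# How much worse a replica must score before reads prefer another node.
dynamic_snitch_badness_threshold: 0.1
```

Note this only shifts read load; writes still go to every replica, so the dynamic snitch cannot compensate for a node that is undersized on disk or memory.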
>> …not have a dramatic effect on the storage requirements.
>>
>
> Aaron,
>
> is there a way to configure wimpy nodes such that the replicas are
> elsewhere?
>
>
> --
> View this message in context:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/cassandra-nodes-with-mixed-hard-disk-sizes-tp6194071p6195543.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
> Nabble.com.
On Mar 22, 2011, at 5:09 AM, aaron morton wrote:
> 1) You should use nodes with the same capacity (CPU, RAM, HDD), cassandra
> assumes they are all equal.
Care to elaborate? While equal nodes will certainly make life easier I would
have thought that dynamic snitch would take care of performance …
1) You should use nodes with the same capacity (CPU, RAM, HDD), cassandra
assumes they are all equal.
2) Not sure what exactly would happen. Am guessing either the node would
shut down or writes would eventually block, probably the former. If the node was
up, read performance may suffer (if there …
This is a two part question ...
1. If you have cassandra nodes with different sized hard disks, how do you
deal with assigning the token ring such that the nodes with larger disks get
more data? In other words, given equally distributed token ranges, when the
smaller disk nodes run out of space …
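One approach people reach for is hand-placing initial tokens so that each node's primary range is proportional to its disk size. A sketch, assuming RandomPartitioner's 0..2^127 token space (as in Cassandra of this era) and a hypothetical node list; the range convention (token as range start) is simplified for illustration:

```python
RING = 2 ** 127  # RandomPartitioner token space

def weighted_tokens(disk_gb):
    """Pick one token per node (in ring order) so that each node's
    primary range is proportional to that node's disk size."""
    total = sum(disk_gb)
    tokens, acc = [], 0
    for size in disk_gb:
        tokens.append(acc * RING // total)
        acc += size
    return tokens

# One 500 GB node and three 1 TB nodes: the small node's primary
# range covers 1/7 of the ring under this convention.
print(weighted_tokens([500, 1000, 1000, 1000]))

# Equal disks degenerate to the usual evenly spaced tokens.
assert weighted_tokens([1, 1, 1, 1]) == [i * RING // 4 for i in range(4)]
```

As the rest of the thread points out, though, with RF > 1 this only skews *primary* ranges: replicas of the neighbouring (larger) ranges still land on the small node, so weighting tokens by disk size under-delivers in practice.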