On Mon, Mar 19, 2012 at 8:24 PM, Eric Evans wrote:
> I'm guessing you're referring to Rick's proposal about ranges per node?
>
Maybe; what I mean is a little simpler than that... We can consider
every node having multiple contiguous ranges and moving those ranges
for bootstrap etc., instead…
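The multiple-ranges-per-node idea could be sketched roughly as follows (a toy illustration; the node names, token counts, and the "steal every fourth range" bootstrap policy are all invented here, not part of any proposal in the thread):

```python
import bisect
import random

RING_SIZE = 2 ** 32

def build_ring(tokens_per_node, nodes, seed=42):
    # Each node announces several random tokens instead of a single one.
    rnd = random.Random(seed)
    return sorted((rnd.randrange(RING_SIZE), n)
                  for n in nodes for _ in range(tokens_per_node))

def owner(ring, key_token):
    # Successor lookup: the node whose token range covers key_token.
    tokens = [t for t, _ in ring]
    i = bisect.bisect_right(tokens, key_token) % len(ring)
    return ring[i][1]

ring = build_ring(4, ["n1", "n2", "n3"])

# Bootstrapping "n4" takes over a subset of existing ranges (here every
# fourth one) instead of splitting a single neighbour's range in half.
new_ring = [(t, "n4" if i % 4 == 0 else n)
            for i, (t, n) in enumerate(ring)]
```

Because the bootstrapping node takes many small ranges from several existing owners, the streaming load spreads across the cluster rather than landing on one neighbour.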
On Mon, Mar 19, 2012 at 9:37 PM, Vijay wrote:
> I also created a ticket
> https://issues.apache.org/jira/browse/CASSANDRA-3768 with some of the
> reasons why I would like to see vnodes in Cassandra.
> It can also potentially reduce the SSTable seeks which a node has to do to
> query data in SizeTieredCompaction…
On Mon, Mar 19, 2012 at 4:45 PM, Peter Schuller wrote:
> > As a side note: vnodes fail to provide solutions to node-based limitations
> > that seem to me to cause a substantial portion of operational issues such
> > as impact of node restarts / upgrades, GC and compaction induced latency. I
>
Actually, it does…
I also created a ticket
https://issues.apache.org/jira/browse/CASSANDRA-3768 with some of the
reasons why I would like to see vnodes in Cassandra.
It can also potentially reduce the SSTable seeks which a node has to do to
query data in SizeTieredCompaction if extended to the filesystem.
But 110% a…
(I may comment on other things more later)
> As a side note: vnodes fail to provide solutions to node-based limitations
> that seem to me to cause a substantial portion of operational issues such
> as impact of node restarts / upgrades, GC and compaction induced latency. I
Actually, it does. At l…
>> Using this ring bucket in the CRUSH topology, (with the hash function
>> being the identity function) would give the exact same distribution
>> properties as the virtual node strategy that I suggested previously,
>> but of course with much better topology awareness.
>
> I will have to re-read your…
> a) a virtual node partitioning scheme (to support heterogeneity and
> management simplicity)
> b) topology aware replication
> c) topology aware routing
I would add (d) limiting the distribution factor to decrease the
probability of data loss/multiple failures within a replica set.
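Point (d) can be made concrete with a rough Monte Carlo sketch. All parameters below (12 nodes, RF=3, the particular groupings) are invented for illustration: when replica sets are spread over many distinct node triples, far more 3-node outage combinations are fatal to some replica set than when replicas are confined to a few fixed groups.

```python
import random

def loss_probability(n_nodes, replica_sets, failures=3,
                     trials=20000, seed=1):
    # Estimate P(at least one replica set lies entirely inside a
    # random `failures`-node outage) by sampling outages.
    rnd = random.Random(seed)
    nodes = range(n_nodes)
    sets = [frozenset(s) for s in replica_sets]
    hits = 0
    for _ in range(trials):
        down = set(rnd.sample(nodes, failures))
        if any(s <= down for s in sets):
            hits += 1
    return hits / trials

N = 12
# High distribution factor: replica sets spread over many distinct
# triples, roughly what uncoordinated random placement tends toward.
spread = [frozenset(random.Random(i).sample(range(N), 3))
          for i in range(60)]
# Low distribution factor: replicas confined to four fixed triples.
grouped = [frozenset(range(i, i + 3)) for i in range(0, N, 3)]
```

Under this toy model `loss_probability(N, spread)` comes out roughly an order of magnitude higher than `loss_probability(N, grouped)`, which is the intuition behind limiting the distribution factor.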
> First of all…
I think if we could go back and rebuild Cassandra from scratch, vnodes
would likely be implemented from the beginning. However, I'm concerned that
implementing them now could be a big distraction from more productive uses
of all of our time and introduce major potential stability issues into what
is…
On Mon, Mar 19, 2012 at 4:24 PM, Sam Overton wrote:
>> For OPP the problem of load balancing is more profound. Now you need
>> vnodes per keyspace because you cannot expect each keyspace to have
>> the same distribution. With three keyspaces you are now unsure as to
>> which one is causing the ho…
> For OPP the problem of load balancing is more profound. Now you need
> vnodes per keyspace because you cannot expect each keyspace to have
> the same distribution. With three keyspaces you are now unsure as to
> which one is causing the hotness. I think OPP should just go away.
That's a good point…
On Mon, Mar 19, 2012 at 4:15 PM, Sam Overton wrote:
> On 19 March 2012 09:23, Radim Kolar wrote:
>>
>>>
>>> Hi Radim,
>>>
>>> The number of virtual nodes for each host would be configurable by the
>>> user, in much the same way that initial_token is configurable now. A host
>>> taking a larger number…
On 19 March 2012 09:23, Radim Kolar wrote:
>
>>
>> Hi Radim,
>>
>> The number of virtual nodes for each host would be configurable by the
>> user, in much the same way that initial_token is configurable now. A host
>> taking a larger number of virtual nodes (tokens) would have
>> proportionately more of the data…
>
Hi Peter,
It's great to hear that others have come to some of the same conclusions!
I think a CRUSH-like strategy for topologically aware
replication/routing/locality is a great idea. I think I can see three
mostly orthogonal sets of functionality that we're concerned with:
a) a virtual node partitioning scheme…
Hi Radim,
The number of virtual nodes for each host would be configurable by the
user, in much the same way that initial_token is configurable now. A host
taking a larger number of virtual nodes (tokens) would have proportionately
more of the data. This is how we anticipate support for heterogeneity…
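The proportionality claim above can be checked with a toy simulation (the node names "big"/"small" and the token counts are invented): a host announcing twice as many random tokens ends up owning roughly twice as much of the ring.

```python
import random

RING = 2 ** 32

def ownership(tokens):
    # tokens: list of (token, node). Credit each token's owner with the
    # interval up to the next token; the last interval wraps around.
    tokens = sorted(tokens)
    wrapped = tokens[1:] + [(tokens[0][0] + RING, None)]
    share = {}
    for (t, node), (nxt, _) in zip(tokens, wrapped):
        share[node] = share.get(node, 0) + (nxt - t)
    return {n: s / RING for n, s in share.items()}

rnd = random.Random(0)
toks = ([(rnd.randrange(RING), "big") for _ in range(512)] +
        [(rnd.randrange(RING), "small") for _ in range(256)])
shares = ownership(toks)
# shares["big"] is roughly 2 * shares["small"]
```

The variance around that 2:1 ratio shrinks as the per-node token count grows, which is one argument for a reasonably large number of virtual nodes per host.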