My understanding is that it is not possible to change the number of tokens
after the node has been initialized. To do so you would first need to
decommission the node, then start it clean with the appropriate num_tokens
in the yaml.
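A minimal sketch of that procedure (data paths are the package defaults and may differ on your install; num_tokens lives in cassandra.yaml):

```shell
# On the node whose token count should change:
nodetool decommission        # streams its data to the rest of the ring

# Once it has left the ring, wipe its local state:
sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches

# Edit conf/cassandra.yaml before restarting, e.g.:
#   num_tokens: 256
# Then start it; the node bootstraps back in with the new token count.
sudo service cassandra start
```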
On Fri, Jul 12, 2013 at 9:17 PM, Radim Kolar wrote:
> it's possible
> Pretty sure you can put the list in the yaml file too.
Yup, sorry.
initial_token can take a comma-separated list of values
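For the record, that looks like this in cassandra.yaml (the token values below are made up for illustration):

```yaml
# A single token is a plain value; with vnodes, initial_token may be a
# comma-separated list, one token per vnode, matching num_tokens.
num_tokens: 3
initial_token: '-9223372036854775808,-3074457345618258603,3074457345618258602'
```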
Cheers
-
Aaron Morton
Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 15/07/2013, at 9:44 AM, Eric Stevens wrote:
> Aaron Morton can confirm, but I think one problem could be that creating an
> index on a field with a small number of possible values is not good.
Yes.
In Cassandra each value in the index becomes a single row in the internal
secondary index CF. With only a few distinct values, you will end up with a
huge row for each value.
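To make that concrete, a hypothetical low-cardinality index (table and column names are invented for illustration):

```cql
CREATE TABLE users (
    id uuid PRIMARY KEY,
    name text,
    gender text          -- only a handful of distinct values
);

-- Each distinct gender becomes one row in the internal index CF, so
-- every matching user key piles into a few enormous index rows.
CREATE INDEX users_gender_idx ON users (gender);
```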
For those following along at home, recently another project in this space was
announced https://github.com/deanhiller/databus
Cheers
-
Aaron Morton
On 13/07/2013, at 4:01 PM, Ananth Gundabattula wrote:
On Mon, Jul 15, 2013 at 12:26 AM, aaron morton wrote:
> Aaron Morton can confirm, but I think one problem could be that creating
> an index on a field with a small number of possible values is not good.
>
> Yes.
> In Cassandra each value in the index becomes a single row in the internal
> secondary index CF.
Thanks for the pointer Aaron.
Regards,
Ananth
On 15-Jul-2013, at 8:30 AM, "aaron morton" <aa...@thelastpickle.com> wrote:
For those following along at home, recently another project in this space was
announced https://github.com/deanhiller/databus
Cheers
-
Aaron Morton
I'm running into a problem where instances of my cluster are hitting over 450K
open files. Is this normal for a 4-node 1.2.6 cluster with a replication factor
of 3 and about 50GB of data on each node? I can push the file descriptor limit
up, but I plan on having a much larger load, so I'm wondering
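For reference, raising the limit for the Cassandra process usually means an entry like this in /etc/security/limits.conf (the user name and values here are examples, not a recommendation):

```
cassandra  soft  nofile  100000
cassandra  hard  nofile  100000
```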
Are you using leveled compaction? If so, what do you have the sstable size
set to? If you're using the defaults, you'll have a ton of really small
files. I believe Albert Tobey recommended setting the table's
sstable_size_in_mb to 256MB to avoid this problem.
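If that turns out to be the cause, the sstable size can be changed per table; a sketch, with a hypothetical keyspace and table name:

```cql
ALTER TABLE mykeyspace.mytable
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
```

Existing small sstables are only merged up to the new size as compaction naturally rewrites them, so the open-file count drops gradually rather than immediately.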
On Sun, Jul 14, 2013 at 5:10 PM, Paul In