1) Ignoring the general overhead of getting Cassandra up and running, what 
determines the amount of memory used per schema? Is it just the size of the 
caches for each column family plus the memtable size?

2) Is the configured cache size (whether an absolute number or a percentage) 
an upper bound on the amount of memory that can be allocated, with actual 
usage growing as the cache fills? I believe the answer is yes; please correct 
me if I am wrong. Assuming it is, what happens if I specify a cache size of X 
items but there is only enough memory for, say, X - 1000 items? Will 
Cassandra just allocate X - 1000 and keep evicting and reloading cache items 
as required? Or is there a possibility of a crash due to lack of memory?

3) Taking this one step further, if there is insufficient memory to allocate 
the caches across column families (and across keyspaces), will Cassandra take 
memory from one cache and give it to another as required? (A little 
over-ambitious, but I thought I would ask instead of assuming.)
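For reference, the cache sizes I mean are the per-ColumnFamily settings in 
storage-conf.xml. A sketch of the kind of configuration I have in mind 
(keyspace and column family names are made up, and the attribute names are 
as I understand them from the 0.6-era config, so treat the details as an 
assumption):

```xml
<!-- Hypothetical excerpt from storage-conf.xml; names are illustrative only. -->
<Keyspace Name="MyKeyspace">
  <!-- KeysCached / RowsCached can be given as an absolute item count
       or as a percentage of keys/rows -->
  <ColumnFamily Name="Users"
                CompareWith="BytesType"
                KeysCached="200000"
                RowsCached="10%"/>
</Keyspace>
```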


Thank you
Kannan

