Copying this back to the mailing list for others and for the archive.

On Apr 4, 2014, at 11:37 AM, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> Great! 
> 
> I'm trying 2.0 right now and have found that 'total_leveldb_mem' and 
> 'total_leveldb_mem_percent' are much easier to use and understand. 
> 
> Thanks!
> 
> 
> On 4 April 2014 18:14, Matthew Von-Maszewski <matth...@basho.com> wrote:
> Oleksiy,
> 
> Go to Step 6:  "Compare Step 2 and Step 5 …".  At the end of the sentence 
> "The above calculations are automated in this memory model spreadsheet." 
> there is a link to an Excel spreadsheet (the memory model spreadsheet).  
> Ignore the surrounding text and use the spreadsheet.
> 
> Much of that text still describes the memory management of 1.2 and 1.3.  
> It seems it was never updated for 1.4.  Hmm, that might be my fault.
> 
> Answers to your comments/questions below:
> 
> 1.  Step 3 on the page is simply wrong for 1.4:  open_file_memory = 
> (max_open_files - 10) * 4194304
> 
> 2.  average_sst_filesize is not relevant in 1.4.  It was used to estimate 
> the size of the bloom filter attached to each .sst file.  There is now a 
> fixed maximum of 150,001 bytes for the bloom filter, and that is the typical 
> size for all files in levels 2 through 6.
> 
> 3.  The page attempts to estimate the total memory usage of one vnode, as 
> does the spreadsheet.  Under either model, the maximum memory is the 
> "working memory per vnode" in Step 5 (the "working per vnode" line in the 
> spreadsheet) multiplied by the number of vnodes active on the node 
> (server).
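Matthew's 1.4-era arithmetic can be sketched in a few lines. This is a rough illustration rather than Basho's published model: the 4 MiB per-open-file charge is the constant from item 1 above, but every other input value is a made-up placeholder.

```python
# Rough sketch of the 1.4-era per-vnode estimate described above.
# FILE_CACHE_BYTES is the 4194304-byte constant from item 1;
# all other numbers are illustrative placeholders.

FILE_CACHE_BYTES = 4_194_304  # charged per open file, per item 1 above

def open_file_memory(max_open_files):
    """Item 1: (max_open_files - 10) * 4194304, per vnode."""
    return (max_open_files - 10) * FILE_CACHE_BYTES

def node_memory(working_memory_per_vnode, active_vnodes):
    """Item 3: Step 5's per-vnode figure times the vnodes on this node."""
    return working_memory_per_vnode * active_vnodes

# Example: 30 open files per vnode, 16 active vnodes, and a
# hypothetical Step 5 result of 120 MiB per vnode.
print(open_file_memory(30))                # 83886080 bytes (~80 MiB)
print(node_memory(120 * 1024 * 1024, 16))  # 2013265920 bytes (~1.9 GiB)
```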
> 
> 
> Now let me make a related note / sales pitch.  The upcoming Riak 2.0 
> eliminates all of the manual calculation and planning.  You tell Riak what 
> percentage of memory is allocated to leveldb, and leveldb then dynamically 
> adjusts each vnode's allocation as your dataset changes and/or vnodes are 
> moved to and from the node (server).
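As a sketch, the 2.0 approach reduces to a single eleveldb setting. The setting names below are the ones mentioned earlier in this thread; the exact file and syntax should be verified against the Riak 2.0 documentation for your release.

```erlang
%% app.config fragment -- illustrative only; check the Riak 2.0 docs
%% for the exact setting names and file in your release.
{eleveldb, [
    %% Give leveldb 45% of physical memory, shared dynamically
    %% across all vnodes on this node:
    {total_leveldb_mem_percent, 45}

    %% Alternatively, an absolute limit in bytes:
    %% {total_leveldb_mem, 4294967296}   %% 4 GiB
]}.
```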
> 
> Matthew
> 
> 
> On Apr 4, 2014, at 5:08 AM, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
> 
>> Can someone please suggest how to understand the formula for 
>> open_file_memory on this page: 
>> http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
>> 
>> 1. It definitely lacks some brackets; the correct formula is:
>> 
>> OPEN_FILE_MEMORY = (max_open_files - 10) * (184 + (average_sst_filesize / 2048) 
>>     * (8 + ((key_size + value_size) / 2048 + 1) * 0.6))
>> 
>> 
>> 2. How to estimate average_sst_filesize?
>> 
>> 
>> 3. Does the result estimate the memory used by a single open file in any 
>> particular vnode, or by a single vnode with max_open_files files open?  As 
>> max_open_files is a per-vnode parameter, how do I estimate the maximum 
>> memory used by leveldb if all vnodes have all max_open_files files open?  Is 
>> it result*ring_size or result*ring_size*max_open_files?
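For reference, the bracketed formula from question 1 evaluates like this in a quick sketch. All inputs are made-up placeholders, and per Matthew's reply above the formula only applies to pre-1.4 releases.

```python
# Pre-1.4 OPEN_FILE_MEMORY formula, bracketed exactly as in question 1.
# All inputs are illustrative placeholders (sizes in bytes).

def open_file_memory_pre14(max_open_files, average_sst_filesize,
                           key_size, value_size):
    return (max_open_files - 10) * (
        184 + (average_sst_filesize / 2048)
        * (8 + ((key_size + value_size) / 2048 + 1) * 0.6)
    )

# e.g. 30 open files, ~30 MiB .sst files, 50-byte keys, 2 KiB values
est = open_file_memory_pre14(30, 30 * 1024 * 1024, 50, 2048)
print(round(est))  # 2834420 -- roughly 2.8 MB per vnode
```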
>> 
>> 
>> Thanks!
>> 
>> -- 
>> Oleksiy
>> _______________________________________________
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 
> 
> -- 
> Oleksiy Krivoshey

