Yes, it is from the free command:

[root@riak1 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:             45           9           0           0          36          35
Swap:            23           0          22

Or from top:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
24421 riak      20   0 16.391g 8.492g  41956 S  82.7 18.5  10542:10 beam.smp
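For reference, the per-key-replica RAM cost implied by the capacity calculator output quoted below can be sanity-checked with a few lines of arithmetic (the figures are taken straight from the calculator output; the per-key number it implies is derived, not an official Bitcask constant):

```python
# Sanity-check the capacity calculator's numbers quoted below.
GIB = 2 ** 30

keys = 73.9e6          # total key/bucket pairs
n_val = 3              # replicas per key
nodes = 5
ram_total_gib = 19.7   # calculator: total RAM across all nodes

key_replicas = keys * n_val
bytes_per_replica = ram_total_gib * GIB / key_replicas
print(f"~{bytes_per_replica:.0f} bytes of RAM per key replica")
print(f"~{ram_total_gib / nodes:.1f} GiB per node")  # matches the quoted 3.9 GiB
```

So the calculator budgets roughly 95 bytes per key replica for the in-memory keydir, well below the ~9 GiB the beam.smp process is showing; the difference is Erlang VM overhead plus whatever is counted as buff/cache.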


I don't think that we are I/O bound, based on dstat output:

----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   0  99   0   0   0| 150k  633k|   0     0 |   1B   25B| 702  2030
  0   5  95   0   0   0|   0     0 |  10k 2172B|   0     0 |1125   765
  2   6  92   0   0   0|   0     0 | 213k  135k|   0     0 |2817  7502
  2   5  92   0   0   0|   0     0 | 159k   88k|   0     0 |2758  9834
  2   5  93   0   0   0|   0  4884k| 278k   70k|   0     0 |2923  7453
  0   5  95   0   0   0|4096B   10M|  21k 1066B|   0     0 |3121   781
  4   7  89   0   0   0|   0    10M| 258k  160k|   0     0 |  13k   16k
  0   5  95   0   0   0|   0  4096B| 200k   65k|   0     0 |1413  1589
  1   5  92   1   0   0|   0    26k| 287k  206k|   0     0 |2124  4990
  1   4  95   0   0   0|   0  2048B|  67k   78k|   0     0 |1667  4504
  1   4  95   0   0   0|   0  1560k| 102k  105k|   0     0 |1639  4146
  3   8  88   1   0   0|   0    86M| 453k  335k|   0     0 |6097    16k
  4  14  81   0   0   0|   0    15k| 635k  564k|   0     0 |5383    14k
  0   4  96   0   0   0|   0     0 |  29k 1697B|   0     0 |1121   769
  4   7  89   0   0   0|   0     0 | 339k  376k|   0     0 |8017    15k
  5  16  79   0   0   0|   0    11M| 847k  824k|   0     0 |  13k   30k
  2  12  86   1   0   0|4096B   10M| 301k  272k|   0     0 |4639    11k
  3  10  87   0   0   0|   0    10M| 508k  610k|   0     0 |8260    17k
  2   9  87   2   0   0|   0    13k| 523k  354k|   0     0 |3432    10k
  0   4  96   0   0   0|   0     0 |3434B 1468B|   0     0 |1063   774

-----Original Message-----
From: Luke Bakken [mailto:lbak...@basho.com] 
Sent: July-18-16 11:14 AM
To: Travis Kirstine <tkirst...@firstbasesolutions.com>
Cc: riak-users@lists.basho.com; ac...@jdbarnes.com
Subject: Re: riak bitcask calculation

Hi Travis,

Could you go into detail about how you're coming up with 9 GiB per node? Is this 
from the output of the "free" command?

Bitcask uses the operating system's buffers for file operations, and will 
happily use as much free RAM as it can get to speed up operations. However, the 
OS will release that memory to other programs that need it should that need arise.
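One way to confirm this pattern on a node (assuming beam.smp is the Riak process name, as in the top output above):

```shell
# Show how much "used" memory is really reclaimable page cache
free -g

# RSS of the Riak VM itself, in KiB (beam.smp, as in the top output above)
ps -o rss= -p "$(pgrep -o -f beam.smp)"

# Page-cache detail straight from the kernel
grep -E '^(Buffers|Cached|MemAvailable)' /proc/meminfo
```

If MemAvailable stays high while "used" grows, the growth is page cache rather than process memory.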

If you're using Linux, these settings may improve bitcask performance in your 
cluster:

http://docs.basho.com/riak/kv/2.1.4/using/performance/#optional-i-o-settings
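As an illustration, these are the kind of knobs that guide covers (the values and device name below are placeholders, not recommendations; take the authoritative settings from the linked page):

```shell
# Illustrative examples only -- consult the linked Basho guide before applying.
sysctl -w vm.swappiness=0                       # discourage swapping the Riak VM
echo deadline > /sys/block/sda/queue/scheduler  # sda is a placeholder device
```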

Benchmarking before and after making changes is the recommended way to proceed.

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com


On Fri, Jul 15, 2016 at 8:48 AM, Travis Kirstine 
<tkirst...@firstbasesolutions.com> wrote:
> I've put ~74 million objects in my riak cluster with a bucket size of 
> 9 bytes and a key size of 21 bytes.  According to the riak capacity 
> calculator this should require ~4 GiB of RAM per node.  Right now my 
> servers are showing ~9 GiB used per node.  Is this caused by hashing 
> or something else?
>
> # capacity calculator output
>
> To manage your estimated 73.9 million key/bucket pairs where bucket 
> names are ~9 bytes, keys are ~21 bytes, values are ~36 bytes and you 
> are setting aside 16.0 GiB of RAM per-node for in-memory data 
> management within a cluster that is configured to maintain 3 replicas 
> per key (N = 3) then Riak, using the Bitcask storage engine, will require at 
> least:
>
> 5 nodes
> 3.9 GiB of RAM per node (19.7 GiB total across all nodes)
> 11.4 GiB of storage space per node (56.8 GiB total storage space used 
> across all nodes)
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
