Thank you for the response, Jon.

So I changed it to 50% and it crashed again.
I have a 5-node cluster with 60 GB of RAM on each node. The ring size is set to 64.
(riak.conf attached, in case anyone has ideas.)

I still see the Erlang process consuming nearly the entire capacity of the system (52 GB):

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
24256 riak      20   0 67.134g 0.052t  18740 D   0.0 90.0   2772:44 beam.smp
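
(For a breakdown of where that resident memory sits inside the Erlang VM, a minimal sketch, assuming riak-admin is on the PATH and the node is still responding:)

  # memory counters reported by the VM: total, processes, binary, ets, ...
  riak-admin status | grep ^memory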

---- Cluster Status ----
Ring ready: true

+--------------------+------+-------+-----+-------+
|        node        |status| avail |ring |pending|
+--------------------+------+-------+-----+-------+
| (C) riak@20.0.0.11 |valid |  up   | 20.3|  --   |
|     riak@20.0.0.12 |valid |  up   | 20.3|  --   |
|     riak@20.0.0.13 |valid |  up   | 20.3|  --   |
|     riak@20.0.0.14 |valid |  up   | 20.3|  --   |
|     riak@20.0.0.15 |valid |  up   | 18.8|  --   |
+--------------------+------+-------+-----+-------+

Thanks,

- Girish Shankarraman


From: Jon Meredith <jmered...@basho.com>
Date: Thursday, October 1, 2015 at 2:06 PM
To: girish shankarraman <gshankarra...@vmware.com>,
"riak-users@lists.basho.com" <riak-users@lists.basho.com>
Subject: Re: riak 2.1.1 : Erlang crash dump

It looks like Riak was unable to allocate 4 GB of memory. You may have to
reduce the amount of memory allocated to leveldb from the default of 70%. Try
setting this in your /etc/riak/riak.conf file:


leveldb.maximum_memory.percent = 50

The memory footprint for Riak should stabilize after a few hours; on servers
with smaller amounts of memory, the 30% left over may not be enough.
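
(As a rough sizing sketch for a 16 GB node, assuming the stock eleveldb settings shipped with Riak 2.x; the absolute cap shown in the comments is an alternative offered for illustration, not something from this thread:)

  ## cap leveldb at roughly half of RAM, leaving the rest for the Erlang VM, AAE and Solr
  leveldb.maximum_memory.percent = 50
  ## or, if your build supports an absolute cap instead of a percentage:
  ## leveldb.maximum_memory = 8GB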

Please let us know how you get on.

On Wed, Sep 30, 2015 at 5:31 PM Girish Shankarraman 
<gshankarra...@vmware.com> wrote:
I have a 7-node Riak cluster with a ring_size of 128.

System Details:
Each node is a VM with 16 GB of memory.
The backend is leveldb.
sys_system_architecture : <<"x86_64-unknown-linux-gnu">>
sys_system_version : <<"Erlang R16B02_basho8 (erts-5.10.3) [source] [64-bit] 
[smp:4:4] [async-threads:64] [kernel-poll:true] [frame-pointer]">>
riak_control_version : <<"2.1.1-0-g5898c40">>
cluster_info_version : <<"2.0.2-0-ge231144">>
yokozuna_version : <<"2.1.0-0-gcb41c27">>

Scenario:
We have up to 400-1000 JSON records being written per second. Each record might
be a few hundred bytes.
I see the following crash message in the Erlang logs after a few hours of
processing. Any suggestions on what could be going on here?

===== Tue Sep 29 20:20:56 UTC 2015
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed

Crash dump was written to: /var/log/riak/erl_crash.dump
eheap_alloc: Cannot allocate 3936326656 bytes of memory (of type "heap").

I also tested running this at 50 GB per Riak node (VM); things work, but memory
keeps growing, so throwing hardware at it doesn't seem very scalable.
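
(To quantify that growth, a crude sketch that samples the VM's total memory once a minute; it assumes riak-admin is on the PATH of each node:)

  # log total VM memory over time to see the growth curve
  while true; do
    date
    riak-admin status | grep ^memory_total
    sleep 60
  done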

Thanks,

- Girish Shankarraman

Attachment: riak.conf

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
