I have a 7-node Riak cluster with a ring_size of 128.

System Details:
Each node is a VM with 16 GB of memory.
The storage backend is LevelDB.
sys_system_architecture : <<"x86_64-unknown-linux-gnu">>
sys_system_version : <<"Erlang R16B02_basho8 (erts-5.10.3) [source] [64-bit] 
[smp:4:4] [async-threads:64] [kernel-poll:true] [frame-pointer]">>
riak_control_version : <<"2.1.1-0-g5898c40">>
cluster_info_version : <<"2.0.2-0-ge231144">>
yokozuna_version : <<"2.1.0-0-gcb41c27">>

Scenario:
We have up to 400-1000 JSON records being written per second; each record is a
few hundred bytes.
After a few hours of processing, I see the crash message below in the Erlang
logs. Any suggestions on what could be going on here?
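For scale, a quick back-of-envelope on the write volume (a rough sketch; the
300 bytes/record figure is just my estimate of "a few hundred bytes"):

```python
# Back-of-envelope ingest rate; 300 bytes/record is a rough estimate.
records_per_sec = 1000    # upper end of the observed write rate
bytes_per_record = 300    # "a few hundred bytes" per JSON record
ingest_mib_per_sec = records_per_sec * bytes_per_record / 2**20
print(f"raw ingest: ~{ingest_mib_per_sec:.2f} MiB/s")
# Well under 1 MiB/s of raw data -- tiny next to 16 GB of RAM, so the
# growth looks like accumulated state (indexes, caches), not buffered writes.
```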

===== Tue Sep 29 20:20:56 UTC 2015
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed

Crash dump was written to: /var/log/riak/erl_crash.dump
eheap_alloc: Cannot allocate 3936326656 bytes of memory (of type "heap").
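That failed allocation is a single ~3.7 GiB heap, which makes me suspect one
Erlang process ballooning rather than overall pressure. The crash dump records
a "Stack+heap" size (in words) for every process, so a small script can surface
the biggest offenders (a sketch; `top_heaps` is just a name I made up, and I
haven't verified it against this particular dump):

```python
# Sketch: list the largest process heaps in an Erlang crash dump.
# "Stack+heap" values are in words (8 bytes per word on 64-bit).
def top_heaps(dump_path, n=10):
    pid, procs = None, []
    with open(dump_path) as dump:
        for line in dump:
            if line.startswith("=proc:"):
                pid = line.strip()          # e.g. "=proc:<0.455.0>"
            elif line.startswith("Stack+heap:") and pid:
                procs.append((int(line.split()[1]), pid))
    return sorted(procs, reverse=True)[:n]

# Usage (against the path from the crash log above):
#   for words, pid in top_heaps("/var/log/riak/erl_crash.dump"):
#       print(f"{words * 8 / 2**20:8.1f} MiB  {pid}")
```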

I also tested with 50 GB of memory per Riak node (VM): things work, but memory
keeps growing, so throwing hardware at it doesn't seem very scalable.
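In case it points anywhere useful, these are the riak.conf knobs I'd look at
first to bound per-node memory (illustrative values, not tested
recommendations):

```ini
## Sketch: riak.conf settings for the two big memory consumers.
## Values are illustrative guesses for a 16 GB node.

## Cap the share of RAM leveldb may use for its caches (default 70%).
leveldb.maximum_memory.percent = 50

## With yokozuna enabled, the Solr JVM is a separate heap; pinning
## -Xms/-Xmx makes its footprint predictable.
search.solr.jvm_options = -d64 -Xms1g -Xmx1g -XX:+UseCompressedOops
```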

Thanks,

- Girish Shankarraman

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
