Spiro,

I am somewhat clueless on OS X, but I use the following commands when
starting Riak, and they seem to work for me:

sudo launchctl limit maxfiles 65536 65536
ulimit -n 65536
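
Note that both of those are transient: the launchctl limit resets at
reboot, and ulimit -n only applies to the shell you run it in, so issue
it in the same shell you then start Riak from. If you want the higher
limit to survive a reboot on 10.8, I believe you can put it in
/etc/launchd.conf (honored on Mountain Lion, though dropped in later
OS X releases):

limit maxfiles 65536 65536

Either way, it is worth confirming the limits actually took effect
before starting Riak:

launchctl limit maxfiles
ulimit -n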

Bryan

On Wed, Sep 10, 2014 at 1:54 AM, Toby Corkindale <t...@dryft.net> wrote:

> Are you trying to use Riak CS for file storage, or are you just using Riak
> and storing 20M against a single key?
> It's not clear from your email.
>
> I ask because if you're in the latter case, it's just not going to work
> reliably -- as far as I know, the recommended maximum object size per key
> is around one megabyte.
>
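> If you do need to keep 20 MB objects in plain Riak rather than Riak CS,
> the usual workaround is to split the file into chunks stored under
> separate keys and reassemble them in order on read. A rough sketch over
> the HTTP interface, assuming the default port 8098 and a hypothetical
> "videos" bucket (adjust both for your setup):
>
> split -b 1m video.mp4 chunk.
> for f in chunk.*; do
>   curl -XPUT -H "Content-Type: application/octet-stream" \
>     --data-binary @"$f" \
>     "http://127.0.0.1:8098/buckets/videos/keys/$f"
> done
>
> Riak CS does that chunking for you, which is why it's the better fit for
> file storage.
>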
> On 10 September 2014 07:30, Spiro N <sp...@greenvirtualsolutions.com>
> wrote:
>
>> Sorry, I am sure you have posted about this topic before, but I am at a
>> standstill. It started right after doing a "get" of a video that was
>> about 20 MB: beam.smp spikes to 100% CPU and Riak crashes. I have done
>> everything the docs ask for, and I have provided everything that seems
>> relevant below. However, I don't know what I don't know and could use
>> some help. Mountain Lion does not let you set the ulimit to unlimited.
>>
>> Thanks in advance for anything at all that may help.
>>
>> Spiro
>>
>>
>> These are my limits; I am running Mountain Lion 10.8.5:
>>
>> server:riak gvs$ launchctl limit
>>     cpu         unlimited      unlimited
>>     filesize    unlimited      unlimited
>>     data        unlimited      unlimited
>>     stack       8388608        67104768
>>     core        0              unlimited
>>     rss         unlimited      unlimited
>>     memlock     unlimited      unlimited
>>     maxproc     709            1064
>>     maxfiles    65336          1000000
>> ----------------------------------------------------------
>>
>> This is my Bitcask content:
>> server:lib gvs$ cd /usr/local/var/lib/riak/
>> server:riak gvs$ ls bitcask/*/* |wc -l
>>      206
>> --------------------------------------------------------------
>> This is the crash.log message:
>>
>>
>> 2014-09-09 14:34:51 =ERROR REPORT====
>> ** Generic server memsup terminating
>> ** Last message in was
>> {'EXIT',<0.20807.0>,{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}}
>> ** When Server state ==
>> {state,{unix,darwin},false,undefined,undefined,false,60000,30000,0.8,0.05,<0.20807.0>,#Ref<0.0.0.120573>,undefined,[reg],[]}
>> ** Reason for termination ==
>> ** {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
>> 2014-09-09 14:34:51 =CRASH REPORT====
>>   crasher:
>>     initial call: memsup:init/1
>>     pid: <0.20806.0>
>>     registered_name: memsup
>>     exception exit: {{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s
>> unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
>>     ancestors: [os_mon_sup,<0.96.0>]
>>     messages: []
>>     links: [<0.97.0>]
>>     dictionary: []
>>     trap_exit: true
>>     status: running
>>     heap_size: 377
>>     stack_size: 24
>>     reductions: 204
>>   neighbours:
>> 2014-09-09 14:34:51 =SUPERVISOR REPORT====
>>      Supervisor: {local,os_mon_sup}
>>      Context:    child_terminated
>>      Reason:     {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
>>      Offender:
>> [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]
>>
>> 2014-09-09 14:34:51 =SUPERVISOR REPORT====
>>      Supervisor: {local,os_mon_sup}
>>      Context:    shutdown
>>      Reason:     reached_max_restart_intensity
>>      Offender:   [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,st
>>
>> ------------------------------------------------------------------------------------------------
>> This is the error.log message:
>>
>> server:riak gvs$ tail error.log
>> 2014-09-09 17:00:25.907 [error] <0.439.1> gen_server memsup terminated
>> with reason: maximum number of file descriptors exhausted, check ulimit -n
>> 2014-09-09 17:00:25.908 [error] <0.439.1> CRASH REPORT Process memsup
>> with 0 neighbours exited with reason: maximum number of file descriptors
>> exhausted, check ulimit -n in gen_server:terminate/6 line 747
>> 2014-09-09 17:00:25.908 [error] <0.97.0> Supervisor os_mon_sup had child
>> memsup started with memsup:start_link() at <0.439.1> exit with reason
>> maximum number of file descriptors exhausted, check ulimit -n in context
>> child_terminated
>> 2014-09-09 17:00:25.908 [error] <0.442.1> gen_server memsup terminated
>> with reason: maximum number of file descriptors exhausted, check ulimit -n
>> 2014-09-09 17:00:25.908 [error] <0.442.1> CRASH REPORT Process memsup
>> with 0 neighbours exited with reason: maximum number of file descriptors
>> exhausted, check ulimit -n in gen_server:terminate/6 line 747
>> 2014-09-09 17:00:25.909 [error] <0.97.0> Supervisor os_mon_sup had child
>> memsup started with memsup:start_link() at <0.442.1> exit with reason
>> maximum number of file descriptors exhausted, check ulimit -n in context
>> child_terminated
>> 2014-09-09 17:00:25.909 [error] <0.445.1> gen_server memsup terminated
>> with reason: maximum number of file descriptors exhausted, check ulimit -n
>> 2014-09-09 17:00:25.909 [error] <0.445.1> CRASH REPORT Process memsup
>> with 0 neighbours exited with reason: maximum number of file descriptors
>> exhausted, check ulimit -n in gen_server:terminate/6 line 747
>> 2014-09-09 17:00:25.909 [error] <0.97.0> Supervisor os_mon_sup had child
>> memsup started with memsup:start_link() at <0.445.1> exit with reason
>> maximum number of file descriptors exhausted, check ulimit -n in context
>> child_terminated
>> 2014-09-09 17:00:25.910 [error] <0.97.0> Supervisor os_mon_sup had child
>> memsup started with memsup:start_link() at <0.445.1> exit with reason
>> reached_max_restart_intensity in context shutdown
>>
>> --------------------------------------------------------------------------------------------------
>> This is the erlang.log:
>>
>> server:riak gvs$ tail erlang.log.1
>> Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:4:4] [async-threads:64]
>> [kernel-poll:true]
>>
>> Eshell V5.9.1  (abort with ^G)
>> (riak@172.16.205.254)1>
>> ===== ALIVE Tue Sep  9 16:29:28 EDT 2014
>>
>> ===== ALIVE Tue Sep  9 16:44:28 EDT 2014
>>
>> ===== ALIVE Tue Sep  9 16:59:28 EDT 2014
>> {"Kernel pid
>> terminated",application_controller,"{application_terminated,os_mon,shutdown}"}
>> server:riak gvs$
>>
>
>
> --
> Turning and turning in the widening gyre
> The falcon cannot hear the falconer
> Things fall apart; the center cannot hold
> Mere anarchy is loosed upon the world
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
