Merge error in bitcask data store
Hi. I am new to the list, so I don't know if this is the right place to start this thread. Our Riak cluster is composed of 4 nodes; the OS is Ubuntu 14.04 and the Riak version is 2.0.0. We are getting a lot of errors like the following in the logs of our Riak nodes:

2014-09-03 13:05:14.212 [error] <0.19152.3672> Failed to merge
{["/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/14.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/13.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/12.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/11.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/10.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/9.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/8.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/7.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/6.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/5.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/4.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/3.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/2.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/1.bitcask.data"],
 ["/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/14.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/13.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/12.bitcask.data"
  ,"/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/11.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/10.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553098649600/9.bitcask.data",
  "/var/lib/riak/data/ten_minutes/570899077082383952423314387779798054553...",...]}:
{generic_failure,error,function_clause,
 [{riak_kv_bitcask_backend,key_transform_to_1,
   [{tombstone,<<2,0,4,109,116,116,108,50,48,49,52,48,57,48,50,50,50,53,48,50,53,48,56,102,56,102,102,101,54,49,49,97,101,51,99,52,52,55,101,55,55,100,99,50,100,49,52,56,51,57,55,48,50>>}],
   [{file,"src/riak_kv_bitcask_backend.erl"},{line,99}]},
  {bitcask,'-expiry_merge/4-fun-0-',7,[{file,"src/bitcask.erl"},{line,1912}]},
  {bitcask_fileops,fold_hintfile_loop,5,[{file,"src/bitcask_fileops.erl"},{line,660}]},
  {bitcask_fileops,fold_file_loop,8,[{file,"src/bitcask_fileops.erl"},{line,720}]},
  {bitcask_fileops,fold_hintfile,3,[{file,"src/bitcask_fileops.erl"},{line,624}]},
  {bitcask,expiry_merge,4,[{file,"src/bitcask.erl"},{line,1915}]},
  {bitcask,merge1,4,[{file,"src/bitcask.erl"},{line,686}]},
  {bitcask,merge,3,[{file,"src/bitcask.erl"},{line,566}]}]}

It seems Riak cannot merge data in the bitcask data store. This is the bitcask configuration for the ten-minutes TTL backend:

{<<"ten_minutes_ttl">>,riak_kv_bitcask_backend,
 [{io_mode,erlang},
  {expiry_grace_time,0},
  {small_file_threshold,5242880},
  {dead_bytes_threshold,4194304},
  {frag_threshold,15},
  {dead_bytes_merge_trigger,4194304},
  {frag_merge_trigger,10},
  {max_file_size,10485760},
  {open_timeout,4},
  {data_root,"/var/lib/riak/data/ten_minutes"},
  {sync_strategy,none},
  {merge_window,always},
  {max_fold_age,-1},
  {max_fold_puts,0},
  {expiry_secs,660},
  {require_hint_crc,true}]}

As a result, the amount of used memory (RAM) keeps growing until the server runs out of free memory. Could you give me a clue that might point to the cause of the problem? Thanks in advance.
___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Memory-backend TTL
Hello, I have a memory backend in production with Riak 2.0.1, 4 servers and 256 vnodes. The servers have the same date and time. I have seen odd behavior with the TTL.

This is the config:

{<<"ttl_stg">>,riak_kv_memory_backend,
 [{ttl,90},{max_memory,25}]},

For example, see this GET response on one of the Riak servers:

< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIc4otdfgR/7bfIYEpkzGNlKI1efJYvCwA=
< Vary: Accept-Encoding
* Server MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained) is not blacklisted
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Link: ; rel="up"
< Last-Modified: Fri, 03 Oct 2014 17:40:05 GMT
< ETag: "3c8bGoifWcOCSVn0otD5nI"
< Date: Fri, 03 Oct 2014 17:47:50 GMT
< Content-Type: application/json
< Content-Length: 17

If the TTL is 90 seconds, why doesn't the GET return "not found" when the difference between "Last-Modified" and the "Date" of the curl response is greater than the TTL?

Thanks in advance!
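[Editor's note] The age implied by the two headers in the transcript above can be computed directly; this sketch uses only values shown in the response:

```python
from email.utils import parsedate_to_datetime

# Header values copied from the GET response above
last_modified = parsedate_to_datetime("Fri, 03 Oct 2014 17:40:05 GMT")
date = parsedate_to_datetime("Fri, 03 Oct 2014 17:47:50 GMT")

age_seconds = (date - last_modified).total_seconds()
print(age_seconds)        # 465.0
print(age_seconds > 90)   # True -- the object outlived the 90-second TTL
```

So the object is roughly 465 seconds old at read time, about five TTL periods past its expiry.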
Re: Memory-backend TTL
Hi Luke,

Of course. The request is a simple curl:

curl -v -X GET "http://localhost:8098/riak/ttl_stg/KEY"

2014-10-06 16:59 GMT+02:00 Luke Bakken:
> Hi Lucas,
>
> Can you confirm that the bucket or bucket-type that contains the
> object you're retrieving has been configured to use the "ttl_stg"
> backend?
> --
> Luke Bakken
> Engineer / CSE
> lbak...@basho.com
Re: Memory-backend TTL
Hi Luke.

curl -vvv -XGET "http://localhost:8098/riak/ttl_stg/props"
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /riak/ttl_stg/props HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:8098
> Accept: */*
>
< HTTP/1.1 404 Object Not Found
* Server MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained) is not blacklisted
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Date: Mon, 06 Oct 2014 16:20:22 GMT
< Content-Type: text/plain
< Content-Length: 10
<
not found
* Connection #0 to host localhost left intact

I think you meant this command:

curl -vvv -XGET "http://localhost:8098/buckets/ttl_stg/props"
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /buckets/ttl_stg/props HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:8098
> Accept: */*
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
* Server MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained) is not blacklisted
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Date: Mon, 06 Oct 2014 16:20:57 GMT
< Content-Type: application/json
< Content-Length: 458
<
* Connection #0 to host localhost left intact
{"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":1,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":3,"rw":"quorum","small_vclock":1,"w":"quorum","young_vclock":1}}

I have observed another strange thing with the memory storage (I don't know whether it is related to the TTL problem). Because the TTL function is not working correctly, I added a "delete" call to our code to emulate the TTL feature (and removed the TTL from the config), but, randomly, a lot of already-deleted keys appear again. The "delete" function is a "delete" followed by a "get" of the same key to ensure the key is deleted. Afterwards, from the console, I check it with this command:

curl -X GET http://localhost:8098/buckets/ttl_stg/keys?keys=true

The problem is that I can't reproduce it; it happens randomly. If I enable the TTL again (on all nodes, restarting the cluster) and remove my "delete" function (I keep the "get" to ensure the key is deleted), the problem is the same: randomly, old keys already deleted (thanks to the TTL) suddenly reappear after a few minutes. The logs show nothing.

2014-10-06 18:01 GMT+02:00 Luke Bakken:
> Hi Lucas,
>
> Could you run the following curl statement and provide the full
> transcript of the command and response?
>
> curl -vvv -XGET "http://localhost:8098/riak/ttl_stg/props"
>
> --
> Luke Bakken
> Engineer / CSE
> lbak...@basho.com
Re: Memory-backend TTL
/var/lib/riak/bc_default
> multi_backend.bc_default.bitcask.io_mode = erlang
>
> This translates to the following in
> /var/lib/riak/generated.configs/app.2014.10.13.13.13.29.config:
>
> {multi_backend_default,<<"bc_default">>},
> {multi_backend,
>  [{<<"ttl_stg">>,riak_kv_memory_backend,[{ttl,90},{max_memory,4}]},
>   {<<"bc_default">>,riak_kv_bitcask_backend,
>    [{io_mode,erlang},
>     {expiry_grace_time,0},
>     {small_file_threshold,10485760},
>     {dead_bytes_threshold,134217728},
>     {frag_threshold,40},
>     {dead_bytes_merge_trigger,536870912},
>     {frag_merge_trigger,60},
>     {max_file_size,2147483648},
>     {open_timeout,4},
>     {data_root,"/var/lib/riak/bc_default"},
>     {sync_strategy,none},
>     {merge_window,always},
>     {max_fold_age,-1},
>     {max_fold_puts,0},
>     {expiry_secs,-1},
>     {require_hint_crc,true}]}]}]},
>
> I set the bucket properties to use the ttl_stg backend:
>
> root@UBUNTU-12-1:~# cat ttl_stg-props.json
> {"props":{"name":"ttl_stg","backend":"ttl_stg"}}
>
> root@UBUNTU-12-1:~# curl -XPUT -H'Content-type: application/json' localhost:8098/buckets/ttl_stg/props --data-ascii @ttl_stg-props.json
>
> root@UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/props
> {"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
>
> And used the following statement to PUT test data:
>
> curl -XPUT localhost:8098/buckets/ttl_stg/keys/1 -d "TEST $(date)"
>
> After 90 seconds, this is the response I get from Riak:
>
> root@UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/keys/1
> not found
>
> I would carefully check all of the app.config / riak.conf files in
> your cluster, the output of "riak config effective" and the bucket
> properties for those buckets you expect to be using the memory backend
> with TTL. I also recommend using the localhost:8098/buckets/ endpoint
> instead of the deprecated riak/ endpoint.
>
> Please let me know if you have additional questions.
> --
> Luke Bakken
> Engineer / CSE
> lbak...@basho.com
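[Editor's note] The behavior demonstrated above (a key readable until its TTL elapses, then "not found") matches a TTL-checked-on-read design. A minimal sketch of that idea follows; this is a toy illustration, not Riak's actual implementation:

```python
import time

class TTLStore:
    """Toy key-value store with a per-store TTL, checked at read time."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock       # injectable clock so tests need not sleep
        self.data = {}

    def put(self, key, value):
        # Record the value together with its write timestamp
        self.data[key] = (value, self.clock())

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, written_at = entry
        if self.clock() - written_at > self.ttl:
            del self.data[key]   # expired: drop the entry, report not found
            return None
        return value

# Usage with a fake clock so the example runs instantly
now = [0.0]
store = TTLStore(ttl_seconds=90, clock=lambda: now[0])
store.put("1", "TEST")
assert store.get("1") == "TEST"   # within the TTL window
now[0] += 91                      # 91 simulated seconds later
assert store.get("1") is None     # behaves like "not found"
```

The practical point of Luke's reply is that this expiry only applies when the bucket actually routes to the TTL-configured backend; a misrouted bucket never expires anything.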
Re: Memory-backend TTL
Hi Luke.

An update on the memory consumption in my 1-server "cluster", about 10 hours after my last email. Remember: max_memory_per_vnode = 250MB and ring_size = 16. Sadly, I had to restart the Riak daemon:

# riak-admin diag -d debug
[debug] Local RPC: os:getpid([]) [5000]
[debug] Running shell command: ps -o pmem,rss -p 17521
[debug] Shell command output: %MEM RSS 93.2 30599792
[debug] Local RPC: riak_core_ring_util:check_ring([]) [5000]
[debug] Cluster RPC: application:get_env([riak_search,enabled]) [5000]
[critical] Riak memory usage is HIGH: 93.2% of available RAM
[info] Riak process is using 93.2% of available RAM, totalling 30599792 KB of real memory.

# riak-admin status|grep memory
memory_total : 36546708552
memory_processes : 410573688
memory_processes_used : 405268472
memory_system : 36136134864
memory_atom : 561761
memory_atom_used : 561285
memory_binary : 14322847352
memory_code : 14371292
memory_ets : 21738236080

# riak-admin vnode-status
VNode: 1370157784997721485815954530671515330927436759040
Backend: riak_kv_multi_backend
Status:
[{<<"one_minute_ttl">>,
  [{mod,riak_kv_memory_backend},
   {data_table_status,[{compressed,false},
                       {memory,1366727},
                       {owner,<8343.9466.104>},
                       {heir,none},
                       {name,riak_kv_1370157784997721485815954530671515330927436759040},
                       {size,35042},
                       {node,'riak@xx'},
                       {named_table,false},
                       {type,ordered_set},
                       {keypos,1},
                       {protection,protected}]},
   {index_table_status,[{compressed,false},
                        {memory,89},
                        {owner,<8343.9466.104>},
                        {heir,none},
                        {name,riak_kv_1370157784997721485815954530671515330927436759040_i},
                        {size,0},
                        {node,'riak@xxx'},
                        {named_table,false},
                        {type,ordered_set},
                        {keypos,1},
                        {protection,protected}]},
   {time_table_status,[{compressed,false},
                       {memory,161493542},
                       {owner,<8343.9466.104>},
                       {heir,none},
                       {name,riak_kv_1370157784997721485815954530671515330927436759040_t},
                       {size,5981239},
                       {node,'riak@'},
                       {named_table,false},
                       {type,ordered_set},
                       {keypos,1},
                       {protection,protected}]}]}]

2014-10-14 2:02 GMT+02:00 Lucas Grijander:
> Hi Luke.
>
> I really appreciate your efforts to attempt to reproduce the problem. I
> think the configs are right. I have also been doing a lot of tests, and
> with 1 server/node the memory bucket works flawlessly, as in your test.
> The Riak cluster where we have the problem has a multi_backend with 1
> memory backend, 2 bitcask backends and 2 leveldb backends. I only pointed
> the memory backend connection in our production code at another new
> "cluster" with a single node, with the same Riak config but with only 1
> memory backend under the multi configuration, and, as I said, all fine;
> the problem vanished. I deduce that the problem appears only with more
> than 1 node and with a lot of requests.
>
> In my tests with the production cluster that has the problem (4 nodes), I
> finally realized that the TTL is working but, randomly and suddenly, keys
> already deleted appear, and keys with a correct TTL disappear :-? (Maybe
> something related to some internal ETS table?) This is when I can
> retrieve keys that have already expired.
>
> In summary:
>
> - With a cluster of 4 nodes (config below): all OK for a while, then
> suddenly we lose the last ~20 seconds of keys and OLD keys appear in the
> list: curl -X GET http://localhost:8098/buckets/ttl_stg/keys?keys=true
>
> buckets.default.last_write_wins = true
> bitcask.io_mode = erlang
> multi_backend.ttl_stg.storage_backend = memory
> multi_backend.ttl_stg.memory_backend.ttl = 90s
> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 25MB
> anti_entropy = passive
> ring_size = 256
>
> - With 1 node: all OK
>
> buckets.default.n_val = 1
> buckets.default.last_write_wins = true
> buckets.default.r = 1
> buckets.default.w = 1
> multi_backend.ttl_stg.storage_backend = memory
> multi_backend.ttl_stg.memory_backend.ttl = 90s
> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 250MB
> ring_size = 16
>
> Another note: with this 1 node (32GB RAM) and only the memory backend
> activated, I have realized that the memory consumption grows without
> control
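[Editor's note] One caveat when reading that vnode-status output: if the table "memory" figures are ETS word counts, as Erlang's ets:info/2 reports them, they must be multiplied by the word size to get bytes. A rough conversion of the numbers shown above, assuming a 64-bit VM (8 bytes per word):

```python
WORD_SIZE = 8  # bytes per word on a 64-bit Erlang VM (assumption)

# 'memory' values taken from the vnode-status output above
time_table_words = 161493542
data_table_words = 1366727

time_table_bytes = time_table_words * WORD_SIZE
data_table_bytes = data_table_words * WORD_SIZE

print(time_table_bytes)           # 1291948336 bytes, ~1.2 GiB for one vnode's time table
print(round(data_table_bytes / 2**20, 1))  # data table is ~10.4 MiB by comparison
```

If the time table really holds ~1.2 GiB per vnode, a handful of vnodes in that state would account for a large share of the reported memory_ets.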
Re: Memory-backend TTL
Hi Luke,

Indeed, when the thousands of requests were removed, the memory stabilized. However, memory consumption is still very high:

riak-admin status |grep memory
memory_total : 18494760128
memory_processes : 145363184
memory_processes_used : 142886424
memory_system : 18349396944
memory_atom : 561761
memory_atom_used : 554496
memory_binary : 7108243240
memory_code : 13917820
memory_ets : 11200328880

I have also tested with Riak 1.4.10 and the behavior is the same.

Is it normal that "memory_ets" is more than 10GB when we have a "ring_size" of 16 and max_memory_per_vnode = 250MB?

2014-10-15 20:50 GMT+02:00 Lucas Grijander:
> Hi Luke.
>
> About the first issue:
>
> - From the beginning, the servers have all been running ntpd. They are
> Ubuntu 14.04 and the ntpd service is installed and running by default.
> - Anti-entropy was also disabled from the beginning:
>
> {anti_entropy,{off,[]}},
>
> About the second issue, I am perplexed because, after 2 restarts of the
> Riak server, there is now a big memory consumption but it is not growing
> like in the previous days. The only change was to remove this code (it was
> called thousands of times/s). It was a possible workaround for the
> previous problem with the TTL, but this code is now useless because the
> TTL is working fine with this node alone:
>
> self.db.delete(key)
> self.db.get(key, r=1)
>
> # riak-admin status|grep memory
> memory_total : 18617871264
> memory_processes : 224480232
> memory_processes_used : 222700176
> memory_system : 18393391032
> memory_atom : 561761
> memory_atom_used : 552862
> memory_binary : 7135206080
> memory_code : 13779729
> memory_ets : 11209256232
>
> The problem is that I don't remember whether the code change was before or
> after the second restart. I am going to restart the Riak server again and
> I will report back on whether the "possible memory leak" is reproduced.
>
> These are the props of the bucket:
>
> {"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":1,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":1,"rw":"quorum","small_vclock":50,"w":1,"young_vclock":20}}
>
> The data we put into the bucket all follows this schema:
>
> KEY: Alphanumeric with a length of 47
> DATA: Long integer.
>
> # riak-admin status|grep puts
> vnode_puts : 84708
> vnode_puts_total : 123127430
> node_puts : 83169
> node_puts_total : 123128062
>
> # riak-admin status|grep gets
> vnode_gets : 162314
> vnode_gets_total : 240433213
> node_gets : 162317
> node_gets_total : 240433216
>
> 2014-10-14 16:26 GMT+02:00 Luke Bakken:
>> Hi Lucas,
>>
>> With regard to the mysterious key deletion / resurrection, please do
>> the following:
>>
>> * Ensure your servers are all running ntpd and have their time
>> synchronized as closely as possible.
>> * Disable anti-entropy. I suspect this is causing the strange behavior
>> you're seeing with keys.
>>
>> Your single node cluster memory consumption issue is a bit of a
>> puzzler. I'm assuming you're using default bucket settings and not
>> using bucket types based on your previous emails, and that allow_mult
>> is still false for your ttl_stg bucket. Can you tell me more about the
>> data you're putting into that bucket for testing? I'll try and
>> reproduce it with my single node cluster.
>>
>> --
>> Luke Bakken
>> Engineer / CSE
>> lbak...@basho.com
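[Editor's note] The arithmetic behind that memory_ets question can be sketched as follows. This is only a nominal upper bound: max_memory_per_vnode caps the stored data per vnode, and it is an assumption here that ETS per-entry overhead is not counted against that cap.

```python
ring_size = 16
max_memory_per_vnode = 250 * 1024**2   # 250 MiB, as configured

# Nominal ceiling for memory-backend data across the whole ring
expected_cap = ring_size * max_memory_per_vnode

memory_ets = 11200328880               # from the riak-admin status output above

print(expected_cap)                         # 4194304000 bytes, ~3.9 GiB
print(round(memory_ets / expected_cap, 2))  # ~2.67x over the nominal cap
```

So the reported ETS usage is roughly 2.7 times what 16 vnodes at 250MB each should hold, which is why the question is a fair one.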
Re: Memory-backend TTL
Sorry, I don't know the MIME type I used. I use the Python API with default options. The docs say "string"?

http://basho.github.io/riak-python-client/object.html#riak.riak_object.RiakObject.encoded_data

2014-10-20 15:43 GMT+02:00 Luke Bakken:
> Lucas,
>
> Thanks for all the detailed information. This is not expected
> behavior. What MIME type are you using for storing the long integer
> data (64 binary bits, I assume)?
>
> I'd like to try and reproduce this. There have been issues with TTL
> and max_memory but they should have been fixed for Riak 2.0.
> --
> Luke Bakken
> Engineer / CSE
> lbak...@basho.com