Hello,
I am writing a new application and I am testing it on a cluster with 4 Riak
nodes (16 GB RAM, 2 x i3 3.4 GHz - 2 cores).
The application is tested with the expected load of 1000 requests/second;
90% of the requests cause a Riak read and a write of a new key. The problem
is that the performa
> Can you attach the eleveldb portion of your app.config file?
> Configuration problems, especially max_open_files being too low, can
> often cause issues like this.
>
> If it isn't sensitive, the whole app.config and vm.args files are also
> often helpful.
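For reference, the eleveldb section Evan is asking about lives in app.config. A minimal sketch of what such a section looks like (the values and data_root path here are illustrative assumptions, not Jan's actual settings):

```erlang
%% app.config -- eleveldb section (illustrative values)
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    %% max_open_files is applied per vnode; setting it too low forces
    %% constant table-cache churn, a common cause of slowdowns.
    {max_open_files, 200},
    %% block cache, also per vnode
    {cache_size, 8388608}
]}.
```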
Hello Evan,
thanks for responding.
I o
Hi Evan,
regarding the swappiness and disk scheduling: these were set to default, I will
correct it and run another test.
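A sketch of those corrections on Linux, assuming sda is the data disk (the device name is an assumption, and these commands must run as root; they also do not persist across reboots without sysctl.conf/udev entries):

```shell
# Disable swap preference and switch the data disk to the deadline scheduler.
sysctl -w vm.swappiness=0
echo deadline > /sys/block/sda/queue/scheduler

# Verify the values took effect:
cat /proc/sys/vm/swappiness          # expect 0
cat /sys/block/sda/queue/scheduler   # deadline should be shown in brackets
```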
The hosting provider sets the computer with software RAID1 over 2 physical
disks, do you think it is useful with Riak?
BTW, I suspected that part of the problem could be c
You are right! Riak was killed by the oom killer on all nodes except the one I
was looking at.
Oct 14 18:34:01 gr-node03 kernel: [ pid ]  uid  tgid   total_vm      rss  cpu  oom_adj  oom_score_adj  name
Oct 14 18:34:01 gr-node03 kernel: [31808]  106  31808  5178811   3884730    0        0              0
Hi Evan,
I corrected the setup according to your recommendations:
- vm.swappiness is 0
- fs is ext4 on software RAID1, mounted with noatime
- disk scheduler is set to deadline (it was the default)
- eleveldb max_open_files is set to 200, cache is set to default
(BTW, why is Riak not using the ne
> 1. Did you realize that the "log_jan.txt" file from #1 below documented a
hard disk failure?
I did not know about the corruption (I did not know that LevelDB logs are
human-readable application logs), thanks for telling me. I did look at syslog
and did not find any traces of a disk failure.
-- Original message --
> From: Matthew Von-Maszewski
> Date: 22. 10. 2012
> Subject: Re: Riak performance problems when LevelDB database grows beyond 16GB
> Jan,
>
> ...
> The next question from me is whether the drive / disk array problems are your
> only problem at this point. Th
It would be sufficient if I could have two eleveldb instances in
riak_kv_multi_backend and could drop one of them with
riak_kv_eleveldb_backend:drop/1 while the other instance is in use. After
another interval I would drop the second instance, and so on.
Jan
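The setup described above could be sketched in app.config roughly like this (riak_kv_multi_backend and riak_kv_eleveldb_backend are real Riak modules, but the backend names "be_a"/"be_b" and the data_root paths are assumed for illustration):

```erlang
%% app.config -- two eleveldb instances under riak_kv_multi_backend
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"be_a">>},
    {multi_backend, [
        %% Two independent LevelDB instances; buckets can be pointed at
        %% either one, and each can be dropped while the other serves traffic.
        {<<"be_a">>, riak_kv_eleveldb_backend,
            [{data_root, "/var/lib/riak/leveldb_a"}]},
        {<<"be_b">>, riak_kv_eleveldb_backend,
            [{data_root, "/var/lib/riak/leveldb_b"}]}
    ]}
]}.
```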
-- Original message --
From