Thanks for the information. Yes, one RiakClient instance per Unix
process is correct.
I will see if there is a way for you to keep track of connections from
the client to Riak. Off the top of my head, the Python client doesn't
have the ability to set limits.
--
Luke Bakken
Engineer
lbak...@basho.com
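A minimal sketch of the one-client-per-process pattern described above, assuming the tasks are run through a multiprocessing pool; the host, port, bucket name, and keys are placeholders rather than anything from this thread.
```
# Sketch only: one RiakClient per worker process, created after the fork via
# a multiprocessing initializer. Host, port, bucket, and keys are placeholders.
import multiprocessing

import riak

_client = None  # each worker process gets exactly one client


def init_worker():
    global _client
    _client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)


def fetch(key):
    bucket = _client.bucket('example-bucket')
    return bucket.get(key).data


if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4, initializer=init_worker)
    try:
        print(pool.map(fetch, ['key1', 'key2', 'key3']))
    finally:
        pool.close()
        pool.join()
```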
Hi Luke,
Yes, I am creating new client objects for each of my tasks.
Please see this GitHub issue against the Python client for some
background as to why.
https://github.com/basho/riak-python-client/issues/497
Basically, I ran into issues with concurrency when processes are forked.
I might exper
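A hedged sketch of the kind of workaround that issue points at: create the RiakClient in the child after the fork and close it before the child exits, so the parent's sockets are never shared. The fork/waitpid wiring, the names, and the assumption that the client exposes close() to release its pooled connections are illustrative only.
```
# Sketch only: build the client in the child *after* the fork and close it
# before the child exits, so sockets are never shared across processes.
# RiakClient.close() is assumed here to release pooled connections.
import os

import riak


def run_task(key):
    pid = os.fork()
    if pid == 0:
        # child: has its own client and therefore its own sockets
        client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
        try:
            obj = client.bucket('example-bucket').get(key)
            print(obj.data)
        finally:
            client.close()  # assumed API: drop this child's connections
        os._exit(0)
    os.waitpid(pid, 0)  # parent waits for the child task to finish


if __name__ == '__main__':
    run_task('key1')
```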
Hi Steven,
At this point I suspect you're using the Python client in such a way
that too many connections are being created. Are you re-using the
RiakClient object or repeatedly creating new ones? Can you provide any
code that reproduces your issue?
--
Luke Bakken
Engineer
lbak...@basho.com
On
Hi Luke,
Here's the output of
$ sysctl fs.file-max
fs.file-max = 2500
Regards
Steven
On Wed, Feb 1, 2017 at 9:30 AM Luke Bakken wrote:
> Hi Steven,
>
> What is the output of this command on your systems?
>
> $ sysctl fs.file-max
>
> Mine is:
>
> fs.file-max = 1620211
>
> --
> Luke Bakken
Hi Steven,
What is the output of this command on your systems?
$ sysctl fs.file-max
Mine is:
fs.file-max = 1620211
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph wrote:
> Hi Shaun,
>
> I'm having this issue again; this time I have captured the system limits
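For reference, fs.file-max is the kernel-wide cap on open file handles. A small Linux-only sketch (not from this thread) that reads /proc/sys/fs/file-nr to compare current allocation against that cap:
```
# Sketch only (Linux): compare the system-wide count of allocated file
# handles against fs.file-max by reading /proc/sys/fs/file-nr.
def file_handle_usage():
    with open('/proc/sys/fs/file-nr') as f:
        allocated, _unused, maximum = (int(x) for x in f.read().split())
    return allocated, maximum


if __name__ == '__main__':
    allocated, maximum = file_handle_usage()
    print('allocated=%d fs.file-max=%d' % (allocated, maximum))
```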
Hi Shaun,
I'm having this issue again; this time I have captured the system limits
while Riak is still crashing.
Please note the lsof and prlimit outputs at the bottom.
steven@hawk5:log/riak:» tail error.log
Hi Shaun,
I have already set this to a very high value
(r...@hawk1.streethawk.com)1> os:cmd("ulimit -n").
"2500\n"
(r...@hawk1.streethawk.com)2>
So the issue is not that the limit is low, but maybe a resource leak? As I
mentioned, our application processes continuously run queries on the cluster.
I've had this issue again. This time I checked the output of lsof, and it
seems the number of established connections is way too high. I've
configured my application tasks to exit and clean up connections
periodically. That should solve it.
Thanks guys.
Steven
On Fri, Jan 27, 2017 at 3:07 AM
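A rough sketch of the workaround Steven describes, assuming a supervisor that restarts exited workers and a client close() method that releases pooled connections; MAX_TASKS, the task queue, and the bucket name are placeholders:
```
# Sketch only: a worker that handles a bounded number of tasks, closes its
# client, and exits so a supervisor can respawn it with a fresh connection
# pool. MAX_TASKS, the queue protocol, and close() are assumptions.
import riak

MAX_TASKS = 1000


def worker(task_queue):
    client = riak.RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
    try:
        for _ in range(MAX_TASKS):
            key = task_queue.get()
            if key is None:  # sentinel: no more work
                break
            client.bucket('example-bucket').get(key)
    finally:
        client.close()  # assumed API: release connections before exiting
```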
FYI: this is the function that is crashing:
get_uint32_measurement(Request, #internal{os_type = {unix, linux}}) ->
    {ok,F} = file:open("/proc/loadavg",[read,raw]),  %% <--- crash line
    {ok,D} = file:read(F,24),
    ok = file:close(F),
    {ok,[Load1,Load5,Load15,_PRun,PTota
Steven,
You may be able to get information via the lsof command as to what
process(es) are using many file handles (if that is the cause).
I searched for that particular error and found this GH issue:
https://github.com/emqtt/emqttd/issues/426
Which directed me to this page:
https://github.com/e
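As a scripted alternative to eyeballing lsof output, a hedged Linux-only sketch that counts a process's open descriptors via /proc (the pid is a placeholder):
```
# Sketch only (Linux): count a process's open descriptors via /proc, a
# scripted stand-in for "lsof -p <pid>". The pid is a placeholder.
import os


def open_fd_count(pid):
    return len(os.listdir('/proc/%d/fd' % pid))


if __name__ == '__main__':
    print(open_fd_count(os.getpid()))  # this script's own process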
Hi Steven,
Based on that log output, it looks like you're running into issues with
system limits, probably open file limits. You can check the value that
Riak has available by connecting to one of the nodes with riak attach, then
executing:
```
os:cmd("ulimit -n").
```
(After, disconnect with c
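The same check can be made from the application side; a small sketch using Python's standard resource module to read the process's open-file limit (the rough equivalent of ulimit -n):
```
# Sketch only: read the process's open-file limit from the application side,
# roughly what "ulimit -n" reports for the shell.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('RLIMIT_NOFILE: soft=%d hard=%d' % (soft, hard))
```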