Hi,
I am exploring a Riak cluster of 4 nodes. All nodes are valid nodes, and I
can query data from buckets on all nodes locally.
However, the following 'member_status' output is not clear to me.
My questions are:
1. I have very little data in this ring, and the data is already
replicated to
Hi Douglas,
I think your problem here is that you're front-loading requests.
You're better off issuing them serially and waiting for one to finish
before issuing the next. Rumor has it there's an example of how one might
do this somewhere in Riaktant [1], a somewhat dusty but useful sample
app. I'll
John,
Glad things are starting to run smoothly. This pam.d setting has tripped me up a
couple of times.
Best,
Sean
On Wednesday, August 1, 2012 at 8:16 PM, John Roy wrote:
> All --
>
> The pam.d/su and limits.conf changes seem to have brought us back to
> reliability -- so far so good. T
All --
The pam.d/su and limits.conf changes seem to have brought us back to
reliability -- so far so good. The time-consuming part was the reboot. I
double-checked the ulimit in the riak console and all came up to 8192 -- my new
limit.
thanks for all your help,
John
On Aug 1, 2012, at 2:5
John,
Please make sure in /etc/pam.d/su, that the following line is uncommented:
session required pam_limits.so
I have noticed lately in Ubuntu that this line is commented out by default.
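A quick way to verify Sean's setting is to grep for the uncommented line. Here is a sketch; the check is demonstrated against sample lines, since pam.d contents vary by system — in practice you would pipe `/etc/pam.d/su` into it:

```shell
# Hypothetical check: is the pam_limits line present and uncommented?
check() { grep -qE '^[[:space:]]*session[[:space:]]+required[[:space:]]+pam_limits\.so'; }

# Uncommented line -> match
printf 'session required pam_limits.so\n' | check && echo enabled
# Commented-out line (leading '#') -> no match
printf '# session required pam_limits.so\n' | check || echo commented
```

On a real box: `check < /etc/pam.d/su && echo "pam_limits enabled"`.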
Best,
Sean
On Wednesday, August 1, 2012 at 5:47 PM, Jared Morrow wrote:
> You will need to m
You will need to make the adjustments in the /etc/security/limits.conf file as
described here http://wiki.basho.com/Open-Files-Limit.html
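Concretely, the kind of entries that page describes look roughly like this (the 8192 value mirrors John's new limit; the exact user name and values are assumptions and vary by setup):

```
# /etc/security/limits.conf -- raise the open-files limit for the riak user
riak soft nofile 8192
riak hard nofile 8192
```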
-Jared
On Aug 1, 2012, at 3:33 PM, John Roy wrote:
> Hi Reid --
>
> I added a riak.conf file in /etc/default with the line:
>
> ulimit -n 8192
>
> then
I'm not an Ubuntu expert, but it's clear that the ulimit is not getting
set correctly
for the "riak" user, as shown by the console output (it reads 1024). In Ubuntu
12.04 I remember I had to edit both /etc/pam.d/su and
/etc/security/limits.conf. I followed the
instructions here [1]. Be sure to do th
Hi Reid --
I added a riak.conf file in /etc/default with the line:
ulimit -n 8192
then rebooted, restarted riak, and then did the attach.
I got this line (which is also in the crash.log), then the limit of 1024. See
below:
16:28:58.041 [error] Hintfile
'/disk1/riak/bitcask/15985174158306750
We don't have max_open_files set in the app.config, so whatever the default is,
that's what we have.
So: 63 vnodes, 3 nodes, and assuming max_open_files = 20 --> 63 * 20 / 3 = 420.
On Aug 1, 2012, at 1:54 PM, Dietrich Featherston wrote:
> What is max_open_files set to in the eleveldb section of app.config?
A ulimit of 4096 might be too low. I'd also double-check that the ulimit
has taken effect, either by attaching to the node (riak attach) or
starting the node in the console (riak console), then typing this:
os:cmd("ulimit -n").
Be sure to include the period (.)
above as well.
Reid
On Aug 1, 2012
Hi --
Riak 1.1.1
three nodes
Ubuntu 10.04.1 LTS
downtime means one node drops off, then the other two follow, so the entire
cluster falls down.
On Aug 1, 2012, at 1:48 PM, Mark Phillips wrote:
> Hey John,
>
> First questions would be:
>
> * What version of Riak?
> * How many nodes?
> * Which O
ulimit is 4096.
Here's the limit in sysctl:
/usr/sbin# sysctl fs.file-max
fs.file-max = 2413423
On Aug 1, 2012, at 1:49 PM, Vlad Gorodetsky wrote:
> What's your ulimit?
> You should try doing something like "sudo ulimit -n 1024" or go with a
> permanent solution via sysctl, as far as I remember.
>
What is max_open_files set to in the eleveldb section of app.config? If
unspecified, I think the limit is 20. Remember that this number is per
vnode. The process limit specified by ulimit -n must be greater than
max_open_files * num_vnodes / num_nodes, allowing room for vnode
multiplexing and fallbac
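Dietrich's sizing rule can be sketched with the numbers from this thread (63 vnodes, 3 nodes, and the assumed eleveldb default of 20); the result is the floor before any headroom for fallback vnodes:

```shell
# Minimum descriptors per node = max_open_files * num_vnodes / num_nodes
max_open_files=20   # assumed eleveldb default, per this thread
num_vnodes=63       # vnode count John reports
num_nodes=3
needed=$(( max_open_files * num_vnodes / num_nodes ))
echo "minimum descriptors per node: $needed"
# -> minimum descriptors per node: 420
```

Note this is a lower bound: the actual ulimit should leave room for fallback vnodes during handoff, plus the node's other file handles.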
What's your ulimit?
You should try doing something like "sudo ulimit -n 1024" or go with a
permanent solution via sysctl, as far as I remember.
/Vlad
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users
Hey John,
First questions would be:
* What version of Riak?
* How many nodes?
* Which OS?
* When you say "downtime" do you mean the entire cluster? Or just a subset
of your nodes?
Mark
On Wed, Aug 1, 2012 at 1:42 PM, John Roy wrote:
> I'm seeing significant downtime on Riak now. Much like th
I'm seeing significant downtime on Riak now, much like the "Riak Crashing
Constantly" thread. However, in this case we get a "Too many open files" error,
and also "contains pointer that is greater than the total data size." See the
error messages below for more details.
If others have an idea
Hi, riak users
I have an object with two different secondary indexes - status and type. For
example, I would like to query all keys with status=new AND type=gloves.
As the wiki says, "In version 1.0 of Riak, index queries are only supported on
one index field at a time."
(http://wiki.basho.com/Seconda