On 16/05/13 15:38, Jared Morrow wrote:
I've considered packaging separate files for configuring the limit for people,
but the user in me always felt like that was something the sysadmin should have
a say in. I rather dislike packages that make system changes without my
knowledge or consent. Maybe that is just me?
I do agree that th
On 16/05/13 14:39, Toby Corkindale wrote:
On 16/05/13 14:24, Jared Morrow wrote:
Well the riak-cs / riak / stanchion scripts all drop privileges using sudo.
On RHEL/Centos this sudo exec carries the settings from the calling user
(in the case of init.d, root) so things are fine there. On Ubuntu/Debian
that does not always work. So if you set the ulimit for the root user, it
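Jared's point about sudo carrying (or not carrying) the caller's limits can be verified from inside the child process itself. A minimal Python sketch, standard library only, not from the thread:

```python
import resource

# Print the open-files limit this process actually inherited,
# regardless of what the invoking shell's `ulimit -n` claimed.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```

Running this under `sudo -u riakcs python check.py` versus a plain root shell shows whether the limit survives the privilege drop.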
I added some debugging to the /etc/init.d/riak-cs script.
As far as it's concerned the ulimit has been successfully increased in
there, right before it calls start-stop-daemon.
Is it possible that part of the Debian infrastructure is dropping
privileges?
On 16/05/13 12:34, Toby Corkindale wrote:
Just wondering the same thing.
$ sudo su riakcs
$ ulimit -n
16384
$ sudo service riak-cs restart
WARNING: ulimit -n is 1024; 4096 is the recommended minimum.
I experience this issue only with Riak CS, not Riak itself.
Richard
On May 15, 2013, at 8:34 PM, Toby Corkindale wrote:
On 16/05/13 13:31, Jeremiah Peschka wrote:
If you check ulimit through Erlang [1], are you seeing the appropriate
ulimit values?
The /proc/$PID/limits method reports max open files=1024
I've only noticed this recently on some Debian Squeeze nodes I've
commissioned... I just checked my Ubuntu P
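The /proc/$PID/limits check mentioned above can be scripted. A small sketch (Linux-only; the parser is illustrative, not from the thread):

```python
def max_open_files(limits_text):
    """Extract the soft 'Max open files' value from /proc/<pid>/limits content."""
    for line in limits_text.splitlines():
        if line.startswith("Max open files"):
            # Remaining columns after the name: soft limit, hard limit, units
            fields = line[len("Max open files"):].split()
            return int(fields[0])
    return None

# In practice: max_open_files(open(f"/proc/{pid}/limits").read())
sample = (
    "Limit                     Soft Limit           Hard Limit           Units\n"
    "Max open files            1024                 4096                 files\n"
)
print(max_open_files(sample))  # → 1024
```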
If you check ulimit through Erlang [1], are you seeing the appropriate
ulimit values?
[1]:
http://riak.markmail.org/search/?q=ulimit#query:ulimit+page:2+mid:bqjbmn3yyh5hdvcb+state:results
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer
I'm confused -- I'm still seeing some warnings from Riak/RiakCS about
the ulimit being set too low, even though I *am* increasing it.
What am I doing wrong here?
# cat /etc/default/riak-cs
ulimit -n 32000
# ulimit -n 8192
# service riak-cs start
WARNING: ulimit -n is 1024; 4096 is the recommended minimum.
Christian, all,
Not sure what kind of magic happened, but no server died in the last 2
days... and counting.
We have not changed a single line of code, which is quite odd...
I'm still monitoring everything and hope (sic!) for a failure soon so we
can fix the problem!
Thanks
Currently, as replication exists today, I don't believe the replication
service would do exactly that (anyone else on the list, please correct me if
I'm wrong here). However, in the coming months we have that capability on the
road map.
However I'm just a little hesitant to suggest committing an enti
Thanks all for your help. To perform my workflow properly, the
`return_head` option was precisely what I needed. Now I can do something
like
X = InitialObject,
{ok, X1} = riakc_pb_socket:put(Pid, X, [return_head]),
X2 = riakc_obj:update_value(X1, NextValue),
{ok, X3} = riakc_pb_socket:put(Pid, X2, [return_head]).
Hey riak-users,
Hot on the heels of RICON|East, I have just released riak-client 1.2.0 to
rubygems.org. This release fixes a few long-standing bugs (2i over PBC,
Excon incompatibility) and also adds support for the "clear bucket
properties" feature of Riak 1.3. Enjoy!
Release notes:
https://gith
Kurt,
I'm not sure about the cause of the MapReduce crash (I suspect it's running
out of resources of some kind, even with the increased VM count and memory).
One word of advice about the list keys timeout, though:
Be sure to use streaming list keys.
In Python, this would look something like:
for
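The example above is cut off; as a hedged sketch of the pattern (assuming the Python client's `bucket.stream_keys()`, which yields keys in batches), it might look like:

```python
def process_keys_streaming(bucket, handle):
    """Consume a bucket's keys batch-by-batch via the streaming API,
    instead of get_keys(), which buffers the whole key list and can time out."""
    for batch in bucket.stream_keys():  # each batch is a list of keys
        for key in batch:
            handle(key)

# Demonstration with a stub standing in for a real riak bucket:
class StubBucket:
    def stream_keys(self):
        yield ["key1", "key2"]
        yield ["key3"]

seen = []
process_keys_streaming(StubBucket(), seen.append)
print(seen)  # → ['key1', 'key2', 'key3']
```

With the real client you would pass `client.bucket('my_bucket')` instead of the stub.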
Jeremy -
As noted in the other replies, yes, you need to use 'return_body' to
get the new vector clock in order to avoid creating a sibling on a
subsequent write of the same key.
That said, you can supply the param `return_head` in the proplist
along with `return_body`, which will eliminate having
Thanks for the kind words, Jeremiah.
Jeremy, if you find anything that's wrong with that description of sibling
behavior, please let me know. It's always possible I missed something
important.
-John
On Wednesday, May 15, 2013, Jeremiah Peschka wrote:
John Daily (@macintux) wrote a great blog post that covers sibling behavior
[1]
In short, though, because you're supplying an older vector clock and you
have allow_mult turned on, Riak decides that, since the vector clock you
supplied conflicts with what's already on disk, a sibling
sho
Hi Kurt,
In order to provide some feedback on why the MapReduce job might be timing
out and help you address this, I will need some additional information:
- Which version of Riak are you running?
- What does your app.config file look like?
- What does your data look like?
-
Hi people,
I'm running MapReduce on a bucket with more than 100,000 items.
The MR runs for 10 seconds, then stops with this error in the logs:
@riak_pipe_vnode:new_worker:766 Pipe worker startup failed: fitting was
gone before startup
And this error in the Python shell:
Error running MapReduce