Re: Running multiple Riak instances.

2010-07-30 Thread Dan Reverri
Hi Misha, Another possibility for starting multiple instances from the packaged builds is the following:
1. Create a folder to store the relevant files and data for "staging": mkdir -p ~/staging/etc
2. Create an app.config and vm.args file for "staging": ~/staging/etc/app.config [ {riak_core
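The steps above are cut off in the archive, but the general shape can be sketched as a shell session. The directory layout and node name below are illustrative assumptions (the original app.config fragment is truncated, so only a minimal vm.args is shown here):

```shell
# Sketch of the multi-instance setup Dan describes; directory layout
# and node name are assumptions, not values from the thread.

# 1. Create a folder to hold config for the "staging" instance
STAGING="$HOME/staging"
mkdir -p "$STAGING/etc"

# 2. Give the second instance its own vm.args; a distinct -name is the
#    key requirement so it can run alongside the default instance
#    (ports in app.config would also need to differ, not shown here)
cat > "$STAGING/etc/vm.args" <<'EOF'
-name staging@127.0.0.1
-setcookie riak
EOF
```

From there, the packaged riak script would be pointed at the staging etc directory so the two instances read separate configs.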

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
That's only a partial paste, correct? How many partitions ({ring_creation_size, 64} in your etc/app.config) do you have defined? There should be a write lock file open for each partition. Also, what is your ulimit -n set to? Thanks, D. On Fri, Jul 30, 2010 at 5:09 PM, Alex Wolfe wrote: > $ lso
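David's two questions can be checked from a shell. The pid and data path in the commented command are placeholders, not values from the thread:

```shell
# Check the shell's open-file-descriptor limit. With the default
# {ring_creation_size, 64}, each of the 64 partitions holds open data
# files plus a bitcask.write.lock, so a low limit (e.g. 256) can be
# exhausted well before the insert workload finishes.
ulimit -n

# Given the Riak pid, count the open write-lock handles; there should
# be at most one per partition. Pid and path are placeholders:
#   lsof -p <riak_pid> | grep -c 'bitcask.write.lock'
```

If the lock-handle count exceeds the partition count, that points at a leak rather than a configuration problem.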

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Alex Wolfe
$ lsof -p 16129 | awk '{print $9}'| uniq -c | grep lock 1 /usr/local/Cellar/riak/0.12.0/libexec/data/bitcask/913438523331814323877303020447676887284957839360/bitcask.write.lock 1 /usr/local/Cellar/riak/0.12.0/libexec/data/bitcask/959110449498405040071168171470060731649205731328/bitcask.writ

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
Yup, that looks like the file handle leak. You can verify by using lsof on the server and looking for multiple handles to bitcask.write.lock. Something like: lsof -p pid | awk '{print $9}'| uniq -c D. On Friday, July 30, 2010, Alex Wolfe wrote: > Hey David. > Does the below log output look like
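Here is the same pipeline run against canned lsof-style output (the file names below are made up for illustration). One caveat: uniq -c only collapses adjacent duplicates, so adding sort before it makes the count reliable when duplicate handles are interleaved in the lsof output:

```shell
# Fake three lines of lsof output; column 9 is the file NAME.
# Two handles point at the same write lock -- the leak signature.
printf '%s\n' \
  'beam 100 riak 20u REG 8,1 0 11 /data/bitcask/AAA/bitcask.write.lock' \
  'beam 100 riak 21u REG 8,1 0 12 /data/bitcask/BBB/bitcask.write.lock' \
  'beam 100 riak 22u REG 8,1 0 11 /data/bitcask/AAA/bitcask.write.lock' \
  > /tmp/lsof_sample

# sort first so uniq -c also counts non-adjacent duplicates;
# the AAA lock shows a count of 2, i.e. a leaked handle
awk '{print $9}' /tmp/lsof_sample | sort | uniq -c | grep lock
```

A healthy node would show a count of 1 next to every bitcask.write.lock path.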

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Ken Matsumoto
Hi Grant, Alexander, David, Thank you for your messages. Here is the info: Version: 0.11.0-1344 Debian package; Key length: 36B. I append the error part of the log file: =ERROR REPORT 30-Jul-2010::04:00:54 === ** State machine <0.28753.1381> terminating ** Last event in was {put,<0.28729.

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
Given the amount of data Ken has inserted, I'd say there's a 90% chance this problem is related to a bug I fixed earlier this week: http://bitbucket.org/basho/bitcask/changeset/6a74d3aac4fb But without more information, it's hard to say. I presume, Ken, you are also seeing a vnode crash error bef

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Alexander Sicular
This may be another max file handle type of error. Or maybe even an OOM thing if the key length is large. On Jul 30, 2010, at 4:59 PM, Grant Schofield wrote: > I am not sure if you hit an already fixed bug in Bitcask or not. What version > of Riak are you running on currently? > > Grant Scho

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Grant Schofield
I am not sure if you hit an already fixed bug in Bitcask or not. What version of Riak are you running on currently? Grant Schofield Developer Advocate Basho Technologies On Jul 30, 2010, at 1:28 PM, Ken Matsumoto wrote: > Hi all, > > I just tried to insert 1Billion data records. > But I got th

Re: Protocol Buffer Timeouts

2010-07-30 Thread Dan Reverri
Hi Misha, Bug #537 has been opened for this issue: https://issues.basho.com/537 Thanks, Dan Daniel Reverri Developer Advocate Basho Technologies, Inc. d...@basho.com On Fri, Jul 30, 2010 at 11:56 AM, Dan Reverri wrote: > Hi Misha, > > Justin was

Riak Recap for 7/28 - 7/29

2010-07-30 Thread Mark Phillips
Afternoon, Evening, Morning to all, Here is a great recap to lead you into the weekend. Enjoy! Mark Community Manager Basho Technologies wiki.basho.com twitter.com/pharkmillups - Riak Recap for 7/28 - 7/29 1) For anyone hacking Riak with Ruby, linked associations hit the master branch o

Re: Protocol Buffer Timeouts

2010-07-30 Thread Dan Reverri
Hi Misha, Justin was kind enough to point me in the right direction regarding your question. You are referring to the default timeout applied to gen_server:call/2. This does appear to be a bug in the client; I will file a ticket in Bugzilla. Thanks, Dan Daniel Reverri Developer Advocate Basho Te

Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Ken Matsumoto
Hi all, I just tried to insert 1 billion data records, but I got the "write_lock" error after 12 million. What is the reason, and how should I avoid it? I use the bitcask (default) backend with no parameters changed in the config file. Each record is just 70B of text data. Regards, Ken. -- Ken Matsumoto

Re: Protocol Buffer Timeouts

2010-07-30 Thread Dan Reverri
Hi Misha, Where in riakc_pb_socket do you see a 5 second timeout? Are you using the 0.2.0 or the latest tip? There is an issue in 0.2.0 where map reduce messages are not handled correctly which is fixed in the latest tip. Thanks, Dan Daniel Reverri Developer Advocate Basho Technologies, Inc. d..

Protocol Buffer Timeouts

2010-07-30 Thread Misha Gorodnitzky
Hello all, We're doing some load testing using Riak's Erlang protocol buffers interface and are running into timeouts. From the looks of it, the timeouts are coming from the gen_server:call invocations in riakc_pb_socket.erl which don't specify a timeout, meaning that they time out after 5 seconds and we lose a

Re: python document storing?

2010-07-30 Thread Dan Reverri
Hi Bob, Sorry for not getting back to you sooner. To store binary data with the Riak Python client, use the bucket.new_binary method. There are no functions that operate directly on files, so you will have to read the file and pass its contents as the data of the Riak object. Here is an