Re: Write_lock error has occurred after inserting 12M data

2010-08-02 Thread David Smith
On Sun, Aug 1, 2010 at 11:40 AM, Alex Wolfe wrote:
> IIRC, that was a full paste of all the bitcask.write.locks. Riak fails
> pretty much immediately while running my test suite, maybe before a lock is
> opened for each partition?

If that was a full paste, yes, you weren't even getting the wh

Re: Write_lock error has occurred after inserting 12M data

2010-08-01 Thread Alex Wolfe
IIRC, that was a full paste of all the bitcask.write.locks. Riak fails pretty much immediately while running my test suite, maybe before a lock is opened for each partition? My ulimit was set to 256, which is obviously no good. After boosting it to 9000 and running my test suite, I have the
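For reference, a quick way to inspect and raise the per-shell descriptor limit before starting Riak from that shell (the 4096 below is a hypothetical illustrative value, not a number recommended in this thread):

```shell
# Show the soft limit currently in effect and the hard ceiling
ulimit -n    # soft limit (256 was the failing value in this thread)
ulimit -Hn   # hard limit; the soft limit cannot be raised past this
# Raise the soft limit for this shell and its children (e.g. a Riak
# node started from it); 4096 is an example value, pick one that
# comfortably covers your partition count
ulimit -n 4096
```

Note the change only applies to the current shell and processes it spawns; a limit set system-wide (e.g. via limits.conf) is needed for nodes started by an init system.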

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
That's only a partial paste, correct? How many partitions ({ring_creation_size, 64} in your etc/app.config) do you have defined? There should be a write lock file open for each partition. Also, what is your ulimit -n set to?

Thanks,

D.

On Fri, Jul 30, 2010 at 5:09 PM, Alex Wolfe wrote:
> $ lso
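As a rough sketch of why a default ulimit of 256 falls short here: with one write lock per partition plus Bitcask's per-partition data files, the descriptor count quickly exceeds 256 at the default ring size. The per-partition figure below is an assumption for illustration, not an exact accounting of Bitcask's file usage:

```shell
# Back-of-envelope fd budget (assumed numbers, for illustration only)
partitions=64      # default {ring_creation_size, 64}
per_partition=3    # assume: write lock + active data file + hint file
headroom=256       # sockets, Erlang runtime files, etc.
echo "suggested minimum for ulimit -n: $((partitions * per_partition + headroom))"
```

Even this conservative sketch lands well above 256, which matches the advice in this thread to raise the limit substantially.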

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Alex Wolfe
$ lsof -p 16129 | awk '{print $9}' | uniq -c | grep lock
   1 /usr/local/Cellar/riak/0.12.0/libexec/data/bitcask/913438523331814323877303020447676887284957839360/bitcask.write.lock
   1 /usr/local/Cellar/riak/0.12.0/libexec/data/bitcask/959110449498405040071168171470060731649205731328/bitcask.writ

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
Yup, that looks like the file handle leak. You can verify by using lsof on the server and looking for multiple handles to bitcask.write.lock. Something like:

lsof -p pid | awk '{print $9}' | uniq -c

D.

On Friday, July 30, 2010, Alex Wolfe wrote:
> Hey David.
> Does the below log output look like
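One caveat with that pipeline: uniq -c only collapses adjacent duplicate lines, so sorting first makes the counts reliable regardless of lsof's output order. A small simulation of the NAME column (hypothetical paths, standing in for real lsof output) shows what a leaked handle would look like:

```shell
# Simulated lsof NAME column: partition 111's lock file appears twice,
# which is the signature of a leaked write-lock handle
printf '%s\n' \
  /data/bitcask/111/bitcask.write.lock \
  /data/bitcask/222/bitcask.write.lock \
  /data/bitcask/111/bitcask.write.lock \
| sort | uniq -c
```

In the output, the leaked lock shows a count of 2 while a healthy partition shows 1.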

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Ken Matsumoto
Hi Grant, Alexander, David,

Thank you for your messages. Here is the info:

Version : 0.11.0-1344 debian package
Key length : 36B

I've appended the error part of the log file:

=ERROR REPORT==== 30-Jul-2010::04:00:54 ===
** State machine <0.28753.1381> terminating
** Last event in was {put,<0.28729.

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread David Smith
Given the amount of data Ken has inserted, I'd say there's a 90% chance this problem is related to a bug I fixed earlier this week: http://bitbucket.org/basho/bitcask/changeset/6a74d3aac4fb But without more information, it's hard to say. I presume, Ken, you are also seeing a vnode crash error bef

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Alexander Sicular
This may be another max file handle type of error. Or maybe even an OOM thing if the key length is large.

On Jul 30, 2010, at 4:59 PM, Grant Schofield wrote:
> I am not sure if you hit an already fixed bug in Bitcask or not. What version
> of Riak are you running on currently?
>
> Grant Scho

Re: Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Grant Schofield
I am not sure if you hit an already fixed bug in Bitcask or not. What version of Riak are you running on currently?

Grant Schofield
Developer Advocate
Basho Technologies

On Jul 30, 2010, at 1:28 PM, Ken Matsumoto wrote:
> Hi all,
>
> I just tried to insert 1 billion data records.
> But I got th

Write_lock error has occurred after inserting 12M data

2010-07-30 Thread Ken Matsumoto
Hi all,

I just tried to insert 1 billion data records, but I got the "write_lock" error after 12 million. What is the reason, and how should I avoid this? I use the bitcask (default) backend with no parameters changed in the config file. Each record is just 70B of text data.

Regards,
Ken.

-- Ken Matsumoto