After successfully getting and storing several objects in a loop, I eventually
get a timeout when storing a small object. I'm using the python client and
protocol buffers. Is the node corrupted somehow? I'm running a single node
cluster on Ubuntu 11.04 (natty), 32-bit machine.
Here's the trace
},{proc_lib,init_p_do_apply,3}]},{gen_server,call,[{riak_kv_vnode_master,'riak@127.0.0.1'},{spawn,{riak_vnode_req_v1,18839669543718670429969374796758002522554368,{server,undefined,undefined},{riak_core_fold_req_v1,#Fun,<0.78.0>}}},infinity]}}}
init terminating in do_boot ()
-----
undefined,none,6}
** Reason for termination =
** {{badmatch,{error,emfile}},
[{bitcask_fileops,create_file_loop,3},
{bitcask,put,3},
{riak_kv_bitcask_backend,put,3},
{riak_kv_vnode,perform_put,3},
{riak_kv_vnode,do_put,7},
{riak_kv_vnode,handle_command,3},
{ria
> VM) is being exceeded. Try bumping up ERL_MAX_PORTS in vm.args.
>
> D.
>
> On Thu, Sep 29, 2011 at 10:52 PM, Jim Adler wrote:
>> Thanks Sean. I added the ulimit -n 10240 to /etc/default/riak, restarted
>> riak, but that didn't work.
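>> The emfile error above means the Erlang VM ran out of file descriptors
>> (each Bitcask data file holds one open). A quick way to confirm whether a
>> raised ulimit actually took effect for a process is to ask from inside it;
>> a minimal sketch, assuming a Unix host like the Ubuntu box in this thread:

```python
import resource

# Report the open-file limits the current process inherited.
# If the process still sees the default soft limit (often 1024),
# the ulimit setting in /etc/default/riak did not take effect.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))
```

>> For the running Riak node itself, the authoritative check on Linux is
>> /proc/&lt;pid&gt;/limits for the beam process.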
>>
>> Fyodor Yaro
1707326989568713251046585937826284568576/1316995340.bitcask.data
/var/lib/riak/bitcask/605153021707326989568713251046585937826284568576/1317493005.bitcask.data
/var/lib/riak/bitcask/605153021707326989568713251046585937826284568576/1317495168.bitcask.data":
{{badmatch,{error,emfile}},[{bitcask,'-
Thanks David - I'll try that on my single-node instance, but I'm working
another Riak issue on another thread.
Jim
- Original Message -
From: "David Smith"
To: "jim adler"
Sent: Friday, October 7, 2011 7:02:01 AM
Subject: Re: Timeout when storing
I'm seeing the same behavior and logs on a bucket with about 8M keys. Fyodor,
any luck with any of Bryan's suggestions?
Jim
- Original Message -
From: "Bryan Fink"
To: "Fyodor Yarochkin"
Cc: riak-users@lists.basho.com
Sent: Friday, October 7, 2011 6:06:15 AM
Subject: Re: Riak 1.
I tried the Bitcask-to-LevelDB procedure on 1.0.1 to switch my backend from
Bitcask to LevelDB. I have one node on Bitcask with my data and the other node
with a fresh LevelDB backend. When I join the fresh LevelDB node, I get the
following error: Failed: r...@xx.xx.xx.xx has a different ring_creation_size
tions/creation_size config?
Thanks for the help!
Jim
From: Dan Reverri
Date: Fri, 21 Oct 2011 09:41:02 -0700
To: Jim Adler
Cc: Ian Plosker , "riak-users@lists.basho.com"
Subject: Re: Riak 1.0
The "ring_creation_size" parameter in app.config must be set before the node
has b
Thanks Nico. I'll make the changes you suggest to regain the consistency.
My understanding is a ring size of 64 would limit my cluster to about 6
nodes, right?
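The "about 6 nodes" figure follows from a common sizing rule of thumb (an
assumption here, not a hard limit enforced by Riak): keep at least roughly
10 partitions per node so data stays evenly spread. A back-of-the-envelope
sketch:

```python
# Rough cluster-sizing sketch, assuming the guideline of
# >= ~10 partitions per node (a rule of thumb, not a hard limit).
ring_size = 64
min_partitions_per_node = 10
max_nodes = ring_size // min_partitions_per_node
print(max_nodes)  # -> 6
```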
Jim
From: Nico Meyer
Date: Sat, 22 Oct 2011 12:12:13 +0200
To: "riak-users@lists.basho.com" , Jim Adler
Subject
> change the ring_creation_size on all your nodes
> to 64 and restart them, if only to make things consistent again.
>
> Cheers,
> Nico
>
> Am 22.10.2011 09:46, schrieb Jim Adler:
>>
>> For the bitcask backend, ring_num_partitions=64 and ring_creation_size=256.
>
I'm trying to run a very simplified key filter that's timing out. I've got
about 8M keys in a 3-node cluster, 15 GB memory, num_partitions=256, LevelDB
backend.
I'm thinking this should be pretty quick. What am I doing wrong?
Jim
Here's the query:
curl -v -d
'{"inputs":{"bucket":"nodes","key_
From: Ryan Caught
Date: Sun, 23 Oct 2011 14:52:48 -0400
To: Jim Adler
Cc: "riak-users@lists.basho.com"
Subject: Re: Key Filter Timeout
If you are doing just a simple equality check in the key filter, then why
not skip key filters and look up the key directly? Key filters are not
performant.
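The difference Ryan describes shows up in the request shapes: a key-filter
MapReduce job has to list and test every key in the bucket, while a plain
fetch touches exactly one key. A sketch of the two requests (the key name is
hypothetical; the bucket "nodes" is from the query earlier in the thread):

```python
import json

# Key-filter job: Riak must enumerate EVERY key in the bucket
# and run the filter against each one before the map phase starts.
filter_job = {
    "inputs": {
        "bucket": "nodes",
        "key_filters": [["eq", "some-key"]],  # hypothetical key value
    },
    "query": [{"map": {"language": "javascript",
                       "name": "Riak.mapValuesJson"}}],
}
print(json.dumps(filter_job))

# Direct lookup: a single GET against one key, no bucket-wide scan.
direct_get = "/riak/nodes/some-key"
print(direct_get)
```

With 8M keys, the full-bucket scan dominates no matter how cheap the filter
itself is, which is why the direct GET stays fast while the job times out.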
at <0.452.0> exit with
reason {sink_died,killed} in context child_terminated
Ideas?
Jim
From: Kelly McLaughlin
Date: Sun, 23 Oct 2011 14:13:09 -0600
To: Jim Adler
Cc: "riak-users@lists.basho.com"
Subject: Re: Key Filter Timeout
Jim,
Looks like you are possibly using both th
having a helluva time
getting these basic tasks accomplished before I ramp to hundreds of millions
of keys.
Thanks for any help.
Jim
From: Kelly McLaughlin
Date: Sun, 23 Oct 2011 14:13:09 -0600
To: Jim Adler
Cc: "riak-users@lists.basho.com"
Subject: Re: Key Filter Timeout
Jim,
Lo
Thanks Kelly. Much appreciated! I'll try your suggestions and get back.
Jim
From: Kelly McLaughlin
Date: Sun, 23 Oct 2011 22:02:52 -0600
To: Jim Adler
Cc: "riak-users@lists.basho.com"
Subject: Re: Key Filter Timeout
Jim,
A couple of things to note. First, bitcask sto
ies?
>
> I've done some informal testing on a 3 node 1.0 cluster and key listing was
> working just peachy on 60 million keys using bitcask as the backend.
>
> Cheers,
>
> Mark
>
> On Sunday 23 October 2011 12:26:35 Aphyr wrote:
>> On 10/23/2011 12:11 PM, Jim Adl
YMMV
>
>Cheers,
>
>Mark
>
>
>On Monday 24 October 2011 07:37:47 Jim Adler wrote:
>> Yes, using 1.0.1 with LevelDB. I moved to it from Bitcask in the hopes
>>of better performance.
>>
>> Good to hear about your 60M key use-case. Can you share any key a
I'm getting a crash when running a key filter query. It looks like the
problem is having a newline in a key name (see name-this\nBob in error
below). Is there any restriction on key characters?
Jim
2011-10-24 05:08:14 =SUPERVISOR REPORT
Supervisor: {local,riak_pipe_fitting_sup}
Con
rant key?
Jim
From: Jim Adler
Date: Mon, 24 Oct 2011 16:04:40 -0700
To: "riak-users@lists.basho.com"
Subject: Newline in Key Legal
I'm getting a crash when running a key filter query. It looks like the
problem is having a newline in a key name (see name-this\nBob in error
b
>On Tue, Oct 25, 2011 at 12:54 AM, Jim Adler wrote:
>> Ok, this is a little simpler but I'm not sure how to proceed. When I
>>pull
>> the keys via ?keys=stream API, I have a key "name-this\nBob".
>>But
>> fetching it gets me a 404 not found error. I
Hi Bryan - Thanks for the info but no dice. app.config http_url_encoding
is "on" but the key shows as not found.
Anything else I can try?
Jim
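One thing worth ruling out is how the key travels over HTTP: a raw newline is
not legal in a URL path, so the key has to be percent-encoded on the way in
and on the way back out, or the fetch will miss. A quick sketch (Python 3
urllib shown here for illustration; the thread itself used the older 1.x
client):

```python
from urllib.parse import quote, unquote

key = "name-this\nBob"

# Percent-encode the key before building the fetch URL:
# the newline becomes %0A, which survives the HTTP round trip.
encoded = quote(key, safe="")
print(encoded)  # -> name-this%0ABob

# Decoding recovers the original key exactly.
assert unquote(encoded) == key
```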
On 10/25/11 8:26 AM, "Bryan Fink" wrote:
>On Tue, Oct 25, 2011 at 10:35 AM, Jim Adler wrote:
>> Hi Bryan - I'm n
I'm consistently seeing a key filter query fail with protocol buffers that
works fine with http. I've seen similar problems reported
(http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-April/004002.html,
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-October/0060
Reid,
Riak 1.0.1 from package, Ubuntu 11.04
Jim
From: Reid Draper
Date: Sat, 29 Oct 2011 13:47:02 -0400
To: Jim Adler
Cc: "riak-users@lists.basho.com"
Subject: Re: Python Client Protocol Buffers Error
Jim,
What version of Riak are you using? Are you using a package or did
ey_filter.tokenize('-', 2) +
                      riak.key_filter.starts_with('jim'))
query.map('''
    function(v) {
        return [[v.key]];
    }''')
for result in query.run(timeout=600):
    print '%s' % (result)
Thanks!
Jim
From: Reid Draper
Date: Sat, 29 Oc
RE: large ring size warning in the release notes, is the performance
degradation linear below 256? That is, until the major release that fixes this,
is it best to keep ring sizes at 64 for best performance?
Jim
Sent from my phone. Please forgive the typos.
On Nov 4, 2011, at 7:20 PM, Jared Mo
ches
>the common 64 partition, 4 node Riak cluster. Thus, choosing a ring
>size based on that ratio and your expected number of future nodes is a
>reasonable choice. Just be sure to stay under 1024 until the issue
>with gossip overloading the cluster is resolved.
>
>-Joe
>
>On S
I'm running 6 nodes on EC2 with load balancing and instance storage. I'm
seeing an average of 115 ms per index.run() using a single _bin index,
protocol buffers, and python client. A regular get() is about 40 ms which
is acceptable for my application.
Has anyone done any tuning to get the 2i run
Speed. I'm running 6 nodes on EC2 with load balancing and instance storage.
I'm seeing an average of 115 ms per index.run() using a single _bin index,
protocol buffers, and python client. A regular get() is about 40 ms. How
can I get faster access with 2i?
Jim
From: Rusty Klophaus
Date: Wed
I'm getting the following error while using protocol buffers
(RiakPbcTransport) with more than one thread (stack trace below):
'Socket returned short packet length 0 - expected 4'
I'm using the 1.3.0 Python client on Mac and Ubuntu 11.04 and have seen
the same error on both OS's. A single-
be shared.
>
>If you have multiple threads, just make sure to use multiple
>clients.
>
>Best Regards,
>
>Armon Dadgar
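>
>The one-client-per-thread advice can be wrapped up so calling code never
>shares a PB connection by accident. A minimal sketch using thread-local
>storage; `factory` here is a hypothetical stand-in for whatever builds the
>real client (e.g. a RiakClient constructor call):

```python
import threading

_local = threading.local()

def get_client(factory):
    """Return this thread's private client, creating it on first use.

    PB connections carry per-connection state, so each thread gets its
    own instance instead of sharing one across the pool.
    """
    if not hasattr(_local, "client"):
        _local.client = factory()
    return _local.client
```

>Each worker thread then calls get_client(...) instead of touching a shared
>global, and repeated calls within one thread reuse the same instance.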
>
>>
>> Message: 1
>> Date: Sun, 26 Feb 2012 19:39:38 +
>> From: Jim Adler
>> To: "riak-users@lists.basho.com"
branch on GitHub. It has much better support for threads and
>> persistent connections. Just don't use variant client_id values and PB
>> connections (switch to Riak 1.1 and ignore client_id).
>>
>> One Client can be shared across threads, but not Objects. I don'
This is great Shuhao!
I've had problems using the python client using multiple threads with protocol
buffers. The exact same multithreaded code works fine with http protocol. So,
any attention you can give to the pbc transport would be hugely appreciated.
Good luck! Can't wait to give your new