If you're limited to R16, you can check out one of the 2.0.0 beta
releases. Please also note that due to VM bugs, we don't recommend
using any R16 release lower than R16B02.
On Thu, Jun 26, 2014 at 11:30 AM, darkchanter wrote:
> Thank You
>
> Unfortunately "openSuSE" is not available on
> http:/
No. A fair amount of work went into R16 compatibility, and it was
never ported to the 1.4 branch. A set of patches exists, but I have
no idea if it would apply cleanly. Best to just use a packaged
version of the application, which will come with its own version of
erlang.
On Thu, Jun 26, 2014 a
00
> net.ipv4.tcp_slow_start_after_idle = 0
> net.ipv4.tcp_tw_reuse = 1
>
> Chris
>
> On Wed, Jun 18, 2014 at 7:32 PM, Evan Vigil-McClanahan
> wrote:
>> Hi Earl,
>>
>> There are some known internode bottlenecks in riak 1.4.x. We've
>> addressed
Hi Earl,
There are some known internode bottlenecks in riak 1.4.x. We've
addressed some of them in 2.0, but others likely remain. If you're
willing to run some code at the console, running the following at the
console (from `riak attach`) should tell you whether or not the 2.0
changes are likely
Hi Alain,
You're likely running an Erlang version that's too old; +sbt was
added earlier than +stbt. They're otherwise equivalent, so using +sbt
should be OK.
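For reference, a sketch of the vm.args line in question (file location and flag semantics assumed from standard Riak packaging and the erl man page):

```
## vm.args sketch (assumed packaging layout). +stbt nnts fails to
## parse on older VMs; +sbt nnts is the equivalent there, with the
## difference that +sbt errors out (rather than silently falling
## back) if the binding can't be applied.
+sbt nnts
```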
On Tue, Jun 17, 2014 at 4:16 PM, Alain Rodriguez wrote:
> Hi all,
>
> I am trying to set the Erlang scheduler bind type to nnts by ad
tion. I would have
> to implement only the reuse mechanism.
>
> On May 13, 2014 12:25 PM, "Evan Vigil-McClanahan"
> wrote:
>>
>> 1. The keys are meant to be unique, so there is a very, very, very low
>> probability of their reuse.
>> 2. No, sorry.
>
1. The keys are meant to be unique, so there is a very, very, very low
probability of their reuse.
2. No, sorry.
It's best to generate your own keys in any case; reading the code, it
looks like the auto-generated keys are only available via HTTP.
On Tue, May 13, 2014 at 9:53 AM, Venkatachalam Subr
f buckets or
> is it just for the first gossip period?
>
>
> On 1 May 2014 01:12, Evan Vigil-McClanahan wrote:
>>
>> I suspect that, given the large heap messages, you're seeing the known
>> issues where when a custom bucket is created and the moving that
>>
e bucket was already created.
> It also doesn't happen if I don't call SetBucket at all (so using default
> backend and options).
> And it seems it doesn't happen if I call SetBucket but don't set `backend`
> property.
>
>
>
> On 1 May 2014 00:51, Evan Vigil
Hi Michael,
the issue there is that the version we ship in our official
packages is R16B02+patches. It's likely that you built with
R15B01. The solution here is to use our packages
(http://docs.basho.com/riak/2.0.0pre20/downloads/), or to rebuild
R16B02 or R16B03-1 (we haven'
t the request id:
https://github.com/basho/riak-erlang-client/blob/master/src/riakc_pb_socket.erl#L490-L494
On Fri, Mar 21, 2014 at 9:29 PM, Evan Vigil-McClanahan
wrote:
> You don't want to recurse when you get the {ReqID, done} message, you
> should just stop there.
>
> On Fri, M
You don't want to recurse when you get the {ReqID, done} message, you
should just stop there.
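A minimal sketch of such a receive loop (message shapes assumed from riakc_pb_socket streaming requests; function and variable names are illustrative, not the client's API):

```erlang
%% Sketch: accumulate streamed results until {ReqId, done} arrives,
%% then return the accumulator instead of recursing again.
collect(ReqId, Acc) ->
    receive
        {ReqId, done} ->
            lists:append(lists:reverse(Acc));  %% stop here; no further recursion
        {ReqId, {keys, Keys}} ->
            collect(ReqId, [Keys | Acc]);
        {ReqId, {error, Reason}} ->
            {error, Reason}
    after 60000 ->
        {error, timeout}
    end.
```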
On Fri, Mar 21, 2014 at 6:20 PM, István wrote:
> With help of Evan (evanmcc) on the IRC channel I was able to kick off
> the clean up job using riak-erlang-client.
>
> Here is the code:
>
> https://gist.
iak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf Of
> Edgar Veiga
> Sent: Tuesday, March 18, 2014 5:20 PM
> To: Evan Vigil-McClanahan
> Cc: riak-users@lists.basho.com
> Subject: Re: Riak node down after ssd failure
>
>
>
> Thanks Evan!
>
>
>
>
Probably the easiest thing would be to wait until the new machine is
ready to join, add it with a nodename distinct from that of the last
one, and force-replace the new node for the old, dead one.
On Tue, Mar 18, 2014 at 1:03 PM, Edgar Veiga wrote:
> Hello all!
>
> I have a 6 machine cluster with
un, Mar 9, 2014 at 10:32 PM, yaochitc wrote:
>
> 2014-3-10 下午1:08于 "Evan Vigil-McClanahan" 写道:
>
>
>>
>> Please try the int_to_bin_bigendian generator.
>>
>> On Sun, Mar 9, 2014 at 8:48 PM, yaochitc wrote:
>> > Hello, I'm trying to do s
Please try the int_to_bin_bigendian generator.
On Sun, Mar 9, 2014 at 8:48 PM, yaochitc wrote:
> Hello, I'm trying to do some test with basho bench, using the
> basho_bench_driver_riakc_pb. It seems that a number string key is
> unsupported under this driver because all threads crashed when gener
Other tweaks:
- If you're running on machines with multiple NUMA zones, +sbt db can help.
- The 2.0 pre-releases have some changes to the networking code that
can help increase throughput, if you're willing to try out an early release.
On Fri, Mar 7, 2014 at 3:06 PM, Christian Dahlqvist wrote:
> As the Pro
Looking over the changes from 1.4.7 -> 1.4.8 I don't see any changes
that would be likely to affect merges.
Is it possible that it's been happening for longer?
Can you give a rough description of your workload so I can make a
start on reproducing the issue tomorrow?
On Wed, Feb 26, 2014 at 7:52
Hi Toby, this is a recently identified regression in 1.4.7.
Go into your bitcask data directories, and you'll likely see a number
of 0 byte data files and 18 byte hintfiles with the same file ID
number. Stop the node, move those aside and you should be back in
business. find can be used to auto
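To spot the affected files, something like this sketch can help (the data directory path is an assumption; adjust it for your install, and run while the node is stopped):

```shell
# Sketch: list the 0-byte data files and 18-byte hintfiles left
# behind by the 1.4.7 regression, so they can be moved aside.
# BITCASK_DIR is an assumed default; override it for your layout.
BITCASK_DIR="${BITCASK_DIR:-/var/lib/riak/bitcask}"
find "$BITCASK_DIR" -type f -name '*.data' -size 0c -print
find "$BITCASK_DIR" -type f -name '*.hint' -size 18c -print
```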
Hi Guido,
That warning is generic, so if you've found that limiting online
schedulers helps your performance, feel free to ignore it.
I've found that on dual-socket systems (which an 8 core machine is
unlikely to be), +sbt nnts generally helps performance more than
changing +S or +SP, but ymm
AAE in 2.0 will have IO rate limiting to keep it from overwhelming disks.
On Mon, Nov 11, 2013 at 1:33 PM, Alexander Sicular wrote:
> I'm interested to see how 2.0 fixes this. I too have been bit by the AAE
> killing servers problem and have had to turn it off (which is thankfully the
> easiest
>>}}
(perfdev@127.0.0.1)20> C:put(riak_object:new(<<"test">>, <<"b">>, <<"c">>)).
ok
(perfdev@127.0.0.1)21> C:get_index(<<"test">>, Query).
{ok,[<<"b">>]}
This should get you a lis
rns strings; is there a way to
> fetch them as bytes? Maybe that would work better; I'm wondering if the
> client is attempting to convert the bytes into unicode strings and dropping
> invalid characters?
>
>
> On 05/11/13 03:44, Evan Vigil-McClanahan wrote:
>>
>> Hi Tob
Hi Toby.
It's possible, since they're stored separately, that the objects were
deleted but the indices were left in place because of some error (e.g.
the operation failed for some reason between the object removal and
the index removal). One of the things on the feature list for the
next release
+A changes the size of the erlang asynchronous IO thread pool. Since
eleveldb doesn't use that thread pool, you can tune it down, but since
it doesn't take up a lot of resources, there is generally no reason to
do so.
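For reference, the relevant vm.args line (a sketch; 64 is assumed here as the value Riak's packages have shipped with):

```
## vm.args sketch: async IO thread pool size. eleveldb does its own
## IO threading, so this pool is mostly idle under that backend;
## lowering it saves little, which is why leaving the default is fine.
+A 64
```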
On Fri, Nov 1, 2013 at 11:48 AM, kzhang wrote:
> Thanks!
>
> Could you elabora
You should probably leave it at its default value. On modern systems
there is no reason to change that value.
On Fri, Nov 1, 2013 at 11:02 AM, kzhang wrote:
> Does anyone have any insight?
>
>
>
> --
> View this message in context:
> http://riak-users.197444.n3.nabble.com/tuning-number-of-async
I replied to this (accidentally) off-list. For the rest of you folks,
if you want to follow the resolution you can add yourself to the
following issue:
https://github.com/basho/bitcask/issues/99
In the meantime, a workaround is to remove all lockfiles
(bitcask.*.lock) from your bitcask director
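A sketch of that cleanup with find (the data directory path is an assumption; run it only while the node is stopped):

```shell
# Sketch: remove stale bitcask lockfiles (bitcask.*.lock) so the
# node can start again. BITCASK_DIR is an assumed default path;
# point it at your actual bitcask directory.
BITCASK_DIR="${BITCASK_DIR:-/var/lib/riak/bitcask}"
find "$BITCASK_DIR" -type f -name 'bitcask.*.lock' -delete
```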
riak-admin vnode-status can be used to get information about the
number of bitcask files, their fragmentation and dead bytes, but since
it uses a lot of blocking vnode commands, it can spike latencies, so
should only be used off-peak.
On Mon, Sep 16, 2013 at 7:36 AM, Alex Moore wrote:
> Hi Charl,
Riak is no longer built or tested on 32 bit machines, so that could
potentially be a problem.
This document has some recommendations on tunings for linux that can
affect the OOM killer and VM system, most notably the vm.swappiness
sysctl:
http://docs.basho.com/riak/latest/cookbooks/Linux-Performan
It does make sense, but it isn't an ideal use-case for riak. Eventual
consistency means that existence checking under partition is always
going to be a bit fraught.
On Tue, Sep 10, 2013 at 2:03 PM, Vincenzo Vitale
wrote:
> Suppose I want to just store keys in a bucket without any body, this make
expiry_secs requires a node restart to take effect. I've done some
work toward lifting that restriction, but it's a bit of a pain and I
am not sure when that work will be finished.
On Wed, Sep 4, 2013 at 9:13 AM, Justin Shoffstall wrote:
> Alexander,
>
>> * How I can ensure that merge process ha
IIRC cs stores ~3x1mb entries for each 1mb of object that you store at
default n_val. So, 300k * 3 * 9 = 8.1M entries, * 150b (to be
conservative) = ~1.2 GB. There will be some other static and per-entry
overheads, but you should be able to do better than that before you
OOM, even on a 1 node system.
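The arithmetic, spelled out (inputs taken from the email; the per-object multiplier and the 150-byte per-entry overhead are the email's own assumptions):

```python
# Back-of-envelope keydir estimate: 300k objects, times the email's
# 3 entries per MB and 9 multiplier, times ~150 bytes per entry.
entries = 300_000 * 3 * 9     # keydir entries
total_bytes = entries * 150   # conservative per-entry overhead
print(entries)                # 8100000
print(total_bytes)            # 1215000000, i.e. ~1.2 GB
```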
Mi
Glad to hear it! You might want to experiment with an even lower file
size, as you'll get better performance if more memory is available
for the page cache, and for merges (which can take up extra memory
while they're happening).
On Thu, Aug 15, 2013 at 10:21 AM, Drew Goya wrote:
> Hey Evan,
I won't have time to look at these logs in more detail until later,
but are you sure you're not getting OOM killed? Usually there is some
sign of what happened when a node went down in console.log as well,
but it just sort of ends. Check the syslog for OS messages; you
might find something there
your patches the memory overhead will
> decrease by 22 (=16+4+2) bytes, am I right?
>
>
> On 5 August 2013 16:38, Evan Vigil-McClanahan wrote:
>>
>> Before I'd done the research, I too thought that the overheads were
>> much lower, near to what the calculator said
Given your leveldb settings, I think that compaction is an unlikely
culprit. But check this out:
2013-08-05 18:01:15.878 [info] <0.83.0>@riak_core_sysmon_
handler:handle_event:92 monitor large_heap <0.14832.557>
[{initial_call,{riak_kv_get_fsm,init,1}},{almost_current_function,{riak_object,encode
s of
> items) so we were looking for the ways to reduce memory consumption.
> Here and here is stated a value of 40 bytes. 22 bytes in ram calculator
> seemed like a mistake because the following example obviously uses a value
> of 40.
>
> Anyway, thanks for your response.
>
&g
Some responses inline.
On Fri, Aug 2, 2013 at 3:11 AM, Alexander Ilyin wrote:
> Hi,
>
> I have a few questions about Riak memory usage.
> We're using Riak 1.3.1 on a 3 node cluster. According to bitcask capacity
> calculator
> (http://docs.basho.com/riak/1.3.1/references/appendices/Bitcask-Capaci
I also updated the issue to explain some of the other stats.
On Fri, Jun 7, 2013 at 7:15 AM, Shane McEwan wrote:
> On 07/06/13 14:22, Brian Shumate wrote:
>>
>> Thanks for the feedback! I've added an issue[0] to our basho_docs
>> repository[1] to get this information into the documentation.
>
>
>
https://github.com/basho/riak_core/pull/312
I had some time on the train today and put in a patch for this. Let
me know what you think of the format.
The following is an example... If it doesn't display fixed-width,
take a look at the PR, which has it correctly formatted.
riak-admin transfer-limit 0
riak-admin transfer-limit N
Where N is the previous value.
Please note that this will kill and restart all handoffs on the
cluster. If this is an issue, you can do:
riak-admin transfer-limit 0
To only bounce the handoffs on that node.
On Fri, May 3, 2013 at 6:15 AM
The most common issue with large MapReduce jobs is one of the nodes
starting to swap (typically the node the request was made on).
Streaming results and pre-reduction[0] (where applicable) can often
lower the memory overhead of running MapReduce jobs on large numbers
of objects.
That would be the
hen
>>> you're on the same machine.
>>>
>>> Perhaps Sean could comment?
>>>
>>> Shuhao
>>> Sent from my phone.
>>>
>>> On 2013-04-10 4:04 PM, "Jeff Peck" wrote:
>>>>
>>>>
>>>&g
evious way I did it uses MapReduce) and I had better results. It finished
>> in 3.5 minutes, but nowhere close to the 15 seconds from the straight http
>> query:
>>
>> import riak
>> from pprint import pprint
>>
>> bucket_name = "mybucket"
>>
get_index() is the right function there, I think.
On Wed, Apr 10, 2013 at 2:53 PM, Jeff Peck wrote:
> I can grab over 900,000 keys from an indexs, using an http query in about 15
> seconds, whereas the same operation in python times out after 5 minutes. Does
> this indicate that I am using the
Riak layers virtual nodes on top of physical nodes. So all of your
virtual nodes are running on a single machine. If you were to add
more nodes, some of the virtual nodes would migrate to those new
nodes, distributing storage and load around the cluster.
On Wed, Apr 10, 2013 at 12:36 PM, Ben McCann wrote:
Hi Marco,
Sorry about the delay on this one. Not sure that the github issue is a
good place to work out a lot of debugging stuff, so I brought it back
here.
When you say 'crash', do you mean crash in the erlang sense (OS
process issues a lot of logging, but does not stop) or in the OS sense
(OS pro
It seems to be really big
> and has a lot of siblings (>100) which sums up to 2GB.
>
> Ingo
>
>
> Am 04.04.2013 17:51, schrieb Evan Vigil-McClanahan:
>>
>> Possible, but would need more information to make a guess. I'd keep a
>>
>> close eye on that
m.args.
>
> The only other messages were about a lot of handoff going on.
>
> Maybe the node was getting some data concerning the 2GB object?
>
> Ingo
>
> Am 04.04.2013 17:25, schrieb Evan Vigil-McClanahan:
>
>> Major error on my part here!
>>
>>> your v
think of that would make +P help with OOM
> issues.
>
> On Thu, Apr 4, 2013 at 9:21 AM, Ingo Rockel
> wrote:
>> A grep for "too many processes" didn't reveal anything. The process got
>> killed by the oom-killer.
>>
>> Am 04.04.2013 16:12, schrieb Eva
As of 1.3 the old client:mapreduce is deprecated, please use
`riak_kv_mrc_pipe:mapred` instead.
On Thu, Apr 4, 2013 at 9:07 AM, Tom Zeng wrote:
> Hi everyone,
>
> I am trying to run the Erlang m/r following the Riak Handbook, and got the
> following error:
>
> (riak@127.0.0.1)4> ExtractTweet = fu
I can't speak to the costing issues, as that isn't something I am
terribly familiar with, but at the moment, riak still has some
overhead issues with very small values. There are upcoming
optimizations in the next major (1.4) release that should help. What
issues did you run into?
On Thu, Apr 4
uot; didn't reveal anything. The process got
> killed by the oom-killer.
>
> Am 04.04.2013 16:12, schrieb Evan Vigil-McClanahan:
>
>> That's odd. It was getting killed by the OOM killer, or crashing
>> because it couldn't allocate more memory? That's suggest
m the same
> three nodes. But everything looks fine.
>
> Ingo
>
> Am 03.04.2013 18:42, schrieb Evan Vigil-McClanahan:
>
>> Another engineer mentions that you posted your eleveldb section and I
>> totally missed it:
>>
>> The eleveldb section:
>>
]},
> {<<"memory_backend">>, riak_kv_memory_backend, [
> {max_memory, 1024}
> ]}
> ]},
>
> We are running a 6 node cluster and the ring size is 128.
>
> Best Regards and thanks for your help on this.
>
> -giri
>
22 PM, Godefroy de Compreignac
> wrote:
>>
>> Thanks Evan, it helped me a lot for my cluster!
>>
>> Godefroy
>>
>>
>>
>> 2013/3/29 Evan Vigil-McClanahan
>>>
>>> That's an interesting result. Once it's fully rebalanced,
mory. I'd remove the tunings for the caches
and buffers and drop max open files to 500, perhaps. Make sure that
you've followed everything in:
http://docs.basho.com/riak/latest/cookbooks/Linux-Performance-Tuning/,
etc.
On Wed, Apr 3, 2013 at 9:33 AM, Evan Vigil-McClanahan
wrote:
> Again,
y?
>
> Ingo
>
> Am 03.04.2013 17:57, schrieb Evan Vigil-McClanahan:
>
>> As for +P it's been raised in R16 (which is on the current man page)
>> on R15 it's only 32k.
>>
>> The behavior that you're describing does sound like a very large
>> o
;
> The outgoing/incoming traffic constantly is around 100 Mbit, if the
> peformance drops happen, we suddenly see spikes up to 1GBit. And these
> spikes constantly happen on three nodes as long as the performance drop
> exists.
>
> Ingo
>
> Am 03.04.2013 17:12, schrieb Evan Vigil-
]},
>
> the ring size is 1024 and the machines have 48GB of memory. Concerning the
> params from vm.args:
>
> -env ERL_MAX_PORTS 4096
> -env ERL_MAX_ETS_TABLES 8192
>
> +P isn't set
>
> Ingo
>
> Am 03.04.2013 16:53, schrieb Evan Vigil-McClanahan:
>
>>
056 [error] <0.9457.323>@riak_kv_console:status:178
> Status failed error:terminated
>
> Ingo
>
> Am 03.04.2013 16:24, schrieb Evan Vigil-McClanahan:
>
>> Resending to the list:
>>
>> Ingo,
>>
>> That is an indication that the protocol buffers serve
At the moment, the master branch of the python client is in a state of
flux leading up to another release. The 1.5-stable branch is more
likely what you want for the time being, or you can install from PyPI
via pip or easy_install.
On Wed, Apr 3, 2013 at 6:02 AM, H. Ibrahim YILMAZ wrote:
> Hi,
Resending to the list:
Ingo,
That is an indication that the protocol buffers server can't spawn a
put fsm, which means that a put cannot be done for some reason or
another. Are there any other messages that appear around this time
that might indicate why?
On Wed, Apr 3, 2013 at 12:09 AM
ady
>
> - Original Message -
> From: "Dave Brady"
> To: "Evan Vigil-McClanahan"
> Cc: riak-users@lists.basho.com
> Sent: Monday, April 1, 2013 11:15:47 AM GMT +01:00 Amsterdam / Berlin / Bern
> / Rome / Stockholm / Vienna
> Subject: Re: Having
Dave,
If you're seeing the process count go that high, it suggests to me
that something else is wrong. Typically, even for heavily loaded
clusters, hundreds of thousands of processes isn't normal. Is there
anything else in the logs?
When a node sees this sort of behavior start, does riak-admin
;
>
> On Thu, Mar 28, 2013 at 6:50 PM, Giri Iyengar
> wrote:
>>
>> Evan,
>>
>> This has been happening for a while now (about 3.5 weeks now), even prior
>> to our upgrade to 1.3.
>>
>> -giri
>>
>> On Thu, Mar 28, 2013 at 6:36 PM, Evan Vig
first complete build of the index trees happens for the
> cluster to start rebalancing itself.
> Could that be the case?
>
> -giri
>
>
> On Thu, Mar 28, 2013 at 5:49 PM, Evan Vigil-McClanahan
> wrote:
>>
>> Giri,
>>
>> if all of the nodes are using i
> thing to note though -- I remember this problem starting roughly around
>> > the
>> > time I migrated a bucket from being backed by leveldb to being backed by
>> > memory. I did this by setting the bucket properties via curl and let
>> > Riak do
>> >
g the bucket properties via curl and let Riak do
> the migration of the objects in that bucket. Would that cause such issues?
>
> Thanks for your help.
>
> -giri
>
>
> On Thu, Mar 28, 2013 at 4:55 PM, Evan Vigil-McClanahan
> wrote:
>>
>> Giri, I've seen s
Giri, I've seen similar issues in the past when someone was adjusting
their ttl setting on the memory backend. Because one memory backend
has it and the other does not, it fails on handoff. The solution
then was to make sure that all memory backend settings are the same
and then do a rolling res
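As a sketch of what "the same settings" means concretely in app.config (illustrative values; property names as used by riak_kv_memory_backend):

```erlang
%% app.config fragment (illustrative values). If ttl or max_memory
%% differs between nodes, handoff between memory backends can fail
%% as described above.
{riak_kv_memory_backend, [
    {max_memory, 1024},   %% per-vnode cap, in MB
    {ttl, 86400}          %% expiry in seconds; keep identical on every node
]}
```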
ything seems to be
> working great!
>
> Godefroy
>
>
> 2013/3/22 Evan Vigil-McClanahan
>>
>> OK, so there's a lot going on there.
>>
>> > 2013-03-22 12:02:18.719 [error] <0.16959.2526> gen_server <0.16959.2526>
>> > terminated wit
il-McClanahan
>>
>> You could look at the output of `riak-admin status` for those nodes
>> and see if one of them isn't taking gets and puts.
>>
>> On Wed, Mar 20, 2013 at 9:58 AM, Christian Steinmann
>> wrote:
>> > No summary.csv shows error for
OK, so there's a lot going on there.
> 2013-03-22 12:02:18.719 [error] <0.16959.2526> gen_server <0.16959.2526>
> terminated with reason: no function clause matching
> riak_core_pb:encode({ts,{1363,205559,674898}},
> {{ts,{1363,205559,674898}},<<131,104,7,100,0,8,114,95,111,98,106,101,99,116,109,0
ore_vnode,init,1}},{almost_current_function,{gen_fsm,loop,7}},{message_queue_len,0}]
> {#Port<0.6629407>,'riak@5.39.74.55'}
>
> I guess I have a problem with my network config...
>
> I precise that the servers hosting my Riak cluster are also running
> Couchebase, N
gt;
> +33(0)6 11 89 13 84
> http://www.linkedin.com/in/godefroy
> http://twitter.com/Godefroy
>
>
> 2013/3/21 Evan Vigil-McClanahan
>>
>> Handoff is done by default on port 8099.
>>
>> I guess what I am getting at here is that this doesn't look like an
&g
Handoff is done by default on port 8099.
I guess what I am getting at here is that this doesn't look like an
obvious riak problem, it's more likely that something on your network
or on your nodes is closing or interrupting those sockets; you'd most
likely get a different error if something interna
p "exited abnormally" /var/log/riak/console.log
> (nothing)
>
>
>
> --
> Godefroy de Compreignac
>
> Eklaweb CEO - www.eklaweb.com
> EklaBlog CEO - www.eklablog.com
>
> +33(0)6 11 89 13 84
> http://www.linkedin.com/in/godefroy
> http://twitter.com/God
r
> partition 970528431040052719119634459225656692740267704320 exited after
> processing 138503 objects
> 2013-03-20 23:16:01.827 [info]
> <0.29612.1647>@riak_core_handoff_sender:start_fold:126 Starting
> ownership_handoff transfer of riak_kv_vnode from 'riak@5.39.68.152'
Godefroy,
It does look like some things are in progress, but it's possible that
there are failures that are keeping your partitions from handing off.
If you grep through your console.log files for 'handoff', do you see
any abnormal exits or other failures?
On Wed, Mar 20, 2013 at 3:22 PM, Godefr
rrect or in time
>
> Am 20.03.2013 17:15 schrieb "Evan Vigil-McClanahan" :
>
>> typically you get that error when one of the connections doesn't
>> return for an entire reporting interval. Is this happening on every
>> request?
>>
>> Note that fo
Also, in the meantime, adding +swt very_low to your vm.args can help
lessen the incidence of this issue.
On Tue, Mar 19, 2013 at 7:41 AM, Ingo Rockel
wrote:
> and the riak-users mailer-daemon should really set a "reply-to"...
>
> Original-Nachricht
> Betreff: Re: riak cluster s
ux systems).
You may also want to raise the ERL_MAX_PORTS value in your vm.args
On Mon, Jan 21, 2013 at 12:14 PM, Daniel Gerep wrote:
> Can you tell me how? I have no idea and thanks for your answer.
>
>
> On 21 January 2013 16:13, Evan Vigil-McClanahan
> wrote:
>>
>>
Daniel,
You need to raise the number of file handles that riak is allowed to use.
On Mon, Jan 21, 2013 at 11:03 AM, Daniel Gerep wrote:
> Hi all.
>
> I'm inserting 2.000 values at a time.
>
> It inserts something around 730 - 783 record and start showing this error on
> riak console
>
> I have c
Note that this uses the luke mapreduce subsystem, which isn't a good
idea, as it's been deprecated.
Unfortunately there doesn't seem to be a way that's as simple to do
pipe mapreduce jobs. I'll ping people and see if we can't get you all
some code that's slightly easier to use.
On F
Hi Sally.
That error can sometimes be misleading. It's quite likely that
there's nothing wrong with your app.config, but there is something
wrong with nodetool.
What is the output when you run:
bash -x riak chkconfig
?
On Wed, Jan 16, 2013 at 9:25 PM, Sally Lehman wrote:
> Hi,
>
> I've spent
of the box. There is an ELB in front of the ring with SSL on
> the ELB and on passed through to the ring, if that's significant.
>
> Nothing special that I can identify. I can attempt to instrument our code
> further...
>
> On Jan 9, 2013, at 10:05 PM, Evan Vigil-McClana
The last time I saw this particular error it was someone on a 64bit
client setting the content length value incorrectly via libcurl.
Requests would work on 64 bit nodes but fail on 32 bit nodes,
presumably because the HTTP client was handing the socket a garbage
value to read, thereby killing it.
It looks to me like your error is here:
> filters = key_filter.tokenize(":", 4) + (key_filter.starts_with('20121223')
> and key_filter.string_to_int().less_than(2012122423))
The 'and' there is getting interpreted as a logical and:
>>> key_filter.starts_with('20121223') and
>>> key_filter.strin
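This is plain Python semantics, not anything client-specific; with stand-in lists instead of the real filter objects (names here are illustrative, not the riak client API):

```python
# Python's `and` returns its second operand when the first is
# truthy, so the first filter is silently discarded. Concatenation
# (+) is what actually combines both.
starts_with = [["starts_with", "20121223"]]
less_than = [["string_to_int"], ["less_than", 2012122423]]

combined = starts_with and less_than
print(combined == less_than)                 # True: starts_with was dropped
print(combined == starts_with + less_than)   # False: not a real combination
```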
Jimmy, if you're trying to install via pip, you need to pip install
protobuf first to get things to work. There's a patch in the works
that should fix this shortly.
On Thu, Dec 20, 2012 at 10:26 PM, Adron Hall wrote:
> Hey James,
>
> Is the http://basho.github.com/riak-python-client/ what you're
To get the owner of a particular partition in a given cluster, attach
to one of its nodes (always be sure to disconnect with control-d!) or
use a remsh, then enter the following:
{ok, Ring} = riak_core_ring_manager:get_my_ring().
riak_core_ring:index_owner(Ring, 0).
replacing 0 here with the inde
4/bin/riak start
> Error reading /home/u1c332/riak-1.2.1/dev/dev4/etc/app.config
> Running Erlang ok
>
>
>
>
> -Original Message-
> From: Evan Vigil-McClanahan [mailto:emcclana...@basho.com]
> Sent: Wednesday, December 19, 2012 4:00 PM
> To: Chenini, Mohamed; r
Err, this should go to the list as well.
On Wed, Dec 19, 2012 at 3:59 PM, Evan Vigil-McClanahan
wrote:
> 1.2.1 is meant to be built with R15B01 with HIPE disabled.
>
> As for that error, it looks like erlang is producing some unexpected
> output (the 'Running Erlang' part).
It looks like mochijson2 has an assumption that anything that looks
like a proplist (a list of 2-tuples) gets turned into a hash.
7> list_to_binary(mochijson2:encode([{a , b}, {foo, bar}])).
<<"{\"a\":\"b\",\"foo\":\"bar\"}">>
This is likely something that should be documented, but it doesn't
see
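If you want the object intent to be explicit rather than relying on that looks-like-a-proplist heuristic, a sketch using mochijson2's tagged form (assuming the commonly used {struct, Proplist} wrapper):

```erlang
%% Sketch: wrapping the proplist in {struct, ...} makes the JSON
%% object intent explicit instead of depending on auto-detection.
list_to_binary(mochijson2:encode({struct, [{a, b}, {foo, bar}]})).
%% <<"{\"a\":\"b\",\"foo\":\"bar\"}">>
```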
There are cases where tombstones are deleted very slowly. Since you
could get one at any time (unless you never delete objects), you need
to write your mapreduce functions to skip over tombstones.
On Fri, Dec 7, 2012 at 10:48 AM, Daniil Churikov wrote:
> Ok, but how fast objects really deleted?
That error is from a riak object tombstone being included in the
results stream. You need to check the object metadata for the
<<"X-Riak-Deleted">> header being true, and then ignore that object in
your map function.
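As a sketch (assuming dict-based object metadata, as in riak_kv of that era; the function name is illustrative), a map function that skips tombstones might look like:

```erlang
%% Sketch: a map phase that drops tombstones by checking for the
%% X-Riak-Deleted metadata key before emitting a value.
map_live_values(Obj, _KeyData, _Arg) ->
    MD = riak_object:get_metadata(Obj),
    case dict:is_key(<<"X-Riak-Deleted">>, MD) of
        true  -> [];                           %% tombstone: ignore it
        false -> [riak_object:get_value(Obj)]
    end.
```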
On Fri, Dec 7, 2012 at 10:01 AM, Daniil Churikov wrote:
> Hello, recently we ha
You could implement that a couple of ways, but I am not sure how
useful it would be. Would you want this for something like
prefetching of large objects that you suspect would block a thread of
execution if fetched synchronously?
On Fri, Nov 2, 2012 at 11:12 AM, Shuhao Wu wrote:
> Hi,
>
> Just a
This is an issue with our packaging.
https://github.com/basho/node_package/issues/26
Riak starts without being configured, which writes a ring file with
the defaults. When you restart it with your proper config, you get this
error.
There's also a core issue to make the error more clear:
https://
Dave,
64 is fine for a 6 node cluster. Rune gives a great rundown of the
downsides of large rings on small numbers of machines in his post.
Usually our recommendation is for ~10 ring partitions per physical
machine, rounded up to the next power of two. Where did you see the
recommendation for 51
lang:nodes() on other nodes). I would suspect a bug in LevelDB, but people
> are using it in production, aren't they?
>
> I intend to retry the test without the software RAID. Any other hints?
>
> Best regards, Jan
>
> -- Původní zpráva --
> Od: Evan Vi
thing riak is usually happy with"
>>
>> So if we build Riak from source we MUST use R15B01 (and not R15B02?!) and
>> MUST disable Hipe? Why? Isn't Hipe normally on by default? What problems
>> does it cause Riak? Is the need to disable Hipe mentioned anywhere?