Is there such a thing as a configurable R value for MR jobs?
Elias
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
I suppose I should instead stream the list of keys to client, slice keys in
client, then fetch the objects, right?
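If it helps, the client-side slicing step can be a simple bounds-clamped sublist; a minimal sketch (the `KeyPager` helper is hypothetical, not part of the Riak Java client):

```java
import java.util.List;

public class KeyPager {
    // Slice a streamed key listing into one page [start, end),
    // clamping to the list bounds so a short final page is safe.
    public static List<String> page(List<String> keys, int start, int end) {
        int from = Math.min(Math.max(start, 0), keys.size());
        int to = Math.min(Math.max(end, from), keys.size());
        return keys.subList(from, to);
    }
}
```

Each key in the returned page would then be fetched individually, or fed as the input list to a MapReduce job.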
On Nov 29, 2011 5:21 PM, "Jonathan Langevin"
wrote:
> When attempting to run m/r queries that execute Riak.reduceSlice to create
> paginated result sets, I've found an unexpected result.
Ripple indicates that a representative from Basho will come to your home and
beat you to death if you perform a list keys in production. Is this still a
massive no-no with 2i?
I'm not familiar with how much is automatically indexed, so I guess I'm asking
if there is any point in my creating an
When attempting to run m/r queries that execute Riak.reduceSlice to create
paginated result sets, I've found an unexpected result.
For instance, if I call Riak.reduceSlice with start = 80, end = 85, which
you would expect to return 5 results (knowing that you have a total of 115
objects stored in
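One likely cause (an assumption worth verifying against your cluster): Riak can re-run a reduce phase incrementally on partial batches of inputs, so the slice gets applied to each batch rather than once to the complete result set. A self-contained sketch of that effect:

```java
import java.util.ArrayList;
import java.util.List;

public class ReduceSliceDemo {
    // Slice as Riak.reduceSlice does: keep values[start..end).
    public static List<Integer> slice(List<Integer> values, int start, int end) {
        int from = Math.min(start, values.size());
        int to = Math.min(end, values.size());
        return new ArrayList<>(values.subList(from, to));
    }

    // Simulate a reduce phase re-run on partial batches: each batch's
    // output is fed back in together with the next batch of inputs.
    public static List<Integer> batchedReduce(List<Integer> all, int batchSize,
                                              int start, int end) {
        List<Integer> acc = new ArrayList<>();
        for (int i = 0; i < all.size(); i += batchSize) {
            acc.addAll(all.subList(i, Math.min(i + batchSize, all.size())));
            acc = slice(acc, start, end); // slice applied to the running batch
        }
        return acc;
    }
}
```

With 115 inputs arriving in batches of 64, slicing [80, 85) inside the reduce yields nothing, while a single pass over all 115 returns the expected 5 — which is why slicing on the client is the safer approach.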
On Nov 29, 2011 5:08 AM, "John Axel Eriksson" wrote:
>
> Is it possible to incrementally add to a file in Luwak using PUT and the
> Content-Range header? I just assumed that it was but I can't seem to
> get the expected results, it just overwrites whatever the key contents
> were before. The reason I
Hi Walter,
It is not recommended to run more than 1 node per machine. Also, creating a
new node by copying/pasting an existing node is not a good idea.
Were you seeing "insufficient vnodes" before you tried adding nodes?
Was the node under any load? MapReduce, ListKeys, etc.
What does "riak-admin
Mac dev -> Ubuntu prod. VMs on my own hardware or on Linode.
@siculars on twitter
http://siculars.posterous.com
Sent from my iRotaryPhone
On Nov 28, 2011, at 20:37, Mark Phillips wrote:
> Afternoon, Evening, Morning to All -
>
> Here's a Recap to kick off the week: new codez, meetups, PDFs
6) Q --- Are people running riak natively on osx (for development) or
running on a vm that matches production? (from kenperkins via #riak)
A --- Anyone? (We had a similar thread on the list several months
back about this but I figured it couldn't hurt to open it up to more
discussion.)
We
Here is the pastebin link.
http://pastebin.com/p8sk8WGi
Thanks
Suresh C Nair
From: Russell Brown
To: suresh chandran
Cc: "riak-users@lists.basho.com"
Sent: Tuesday, November 29, 2011 9:56 AM
Subject: Re: Multiple keys to fetch in a Java client
Oh, you replied already.
Re-sending to list, since I sent to just Suresh in error.
Hi Suresh,
Thanks for the further information. So you don't add a map or reduce phase. It
is a bug[1] that the client allows you to execute that job since it is not
valid. Thanks for finding it. So first I need to fix that.
I'll also loo
Oh, you replied already.
Please can you use a gist or pastebin for larger blocks of code? They're much
easier to read.
On 29 Nov 2011, at 14:49, suresh chandran wrote:
> Hi Russel,
>
> After your mail, I altered the code as
>
> public static boolean fetchAll(String bucket, Collection keys,
>
Hi Russel,
After your mail, I altered the code as
public static boolean fetchAll(String bucket, Collection keys,
StringBuilder response)
{
Iterator valuesIter = keys.iterator();
try
{
IRiakClient iriakClient = RiakFactory.httpClient("http://127.0.0.1:8091/riak");
BucketKeyMapReduce reduce = ir
Hi Russel,
I am using a static method to get the values, which is like
public static boolean fetchAll(String bucket, Collection keys,
StringBuilder response)
{
PBClientConfig conf = new PBClientConfig.Builder()
.withHost("127.0.0.1")
.withPort(8091)
.build();
try
{
IRiakCl
Hi Suresh,
On 29 Nov 2011, at 12:10, suresh chandran wrote:
> Thanks Russell for the reply.
>
> After going through all the APIs I too reached the same opinion:
> 1) simultaneous multiple fetch and 2) MapReduce. However, simultaneous access
> would make the fetch much slower and it doesn't look
On Tue, Nov 29, 2011 at 3:56 AM, Yehuda Zargrov wrote:
> .reduce("function(v) { var s = {}; for (var i in v) { var date = ''; var sum = 0; for (var n in v[i]) { if (n === \"date\") date = v[i][n]; if (n === \"sum\") sum = v[i][n]; } if (date in s) s[date] += sum; else s[date] = sum; } return [v]; }", :k
On Tue, Nov 29, 2011 at 6:07 AM, John Axel Eriksson wrote:
> Is it possible to incrementally add to a file in Luwak using PUT and the
> Content-Range header.
Hi, John. At this time, it is not possible to use Content-Range when
PUTting Luwak content.
> The reason I want to do this is because we
Thanks Russell for the reply.
After going through all the APIs I too reached the same opinion: 1) simultaneous
multiple fetch and 2) MapReduce. However, simultaneous access would make the
fetch much slower and it doesn't look efficient (since there are threads
spawned for each key and fetching sa
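On the one-thread-per-key concern: a bounded thread pool keeps the concurrency fixed regardless of how many keys are fetched. A minimal sketch, where `fetchOne` is a stub standing in for a real client call (e.g. a bucket fetch) and all names are illustrative:

```java
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultiFetch {
    // Stub standing in for a real client call such as bucket.fetch(key).
    public static String fetchOne(String key) {
        return "value-of-" + key;
    }

    // Fetch many keys with a bounded pool instead of one thread per key.
    public static Map<String, String> fetchAll(Collection<String> keys, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            Map<String, Future<String>> futures = new LinkedHashMap<>();
            for (String k : keys) {
                futures.put(k, pool.submit(() -> fetchOne(k)));
            }
            Map<String, String> out = new LinkedHashMap<>();
            for (Map.Entry<String, Future<String>> entry : futures.entrySet()) {
                out.put(entry.getKey(), entry.getValue().get());
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

All submits happen up front, so fetches overlap up to the pool size while results come back in key order.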
Kresten,
Thank you very much for the quick response and the confirmation of this use
case of Bitcask. I'll soon experiment with this setup.
Cheers,
Jeroen
On Tue, Nov 29, 2011 at 11:28 AM, Kresten Krab Thorup wrote:
> Jeroen,
>
> You can run multiple bitcask backends using the multi_backend, an
Is it possible to incrementally add to a file in Luwak using PUT and the
Content-Range header? I just assumed that it was but I can't seem to
get the expected results, it just overwrites whatever the key contents were
before. The reason I want to do this is because we have some pretty
large files I
Jeroen,
You can run multiple bitcask backends using the multi_backend, and configure
them differently (one with a timeout and one without). That's what we do when
we need this. The only issue is that you need to watch the number of file
descriptors, since even one bitcask is pretty fd-hungry
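For reference, a minimal app.config sketch of this layout (the backend names, data paths, and expiry value are illustrative):

```erlang
%% In the riak_kv section of app.config: two bitcask backends,
%% one with key expiry and one without.
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"bitcask_keep">>},
    {multi_backend, [
        {<<"bitcask_keep">>, riak_kv_bitcask_backend,
            [{data_root, "/var/lib/riak/bitcask_keep"}]},
        {<<"bitcask_expire">>, riak_kv_bitcask_backend,
            [{data_root, "/var/lib/riak/bitcask_expire"},
             {expiry_secs, 86400}]}
    ]}
]}
```

A bucket is then pointed at one of the named backends via its `backend` bucket property.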
Hi all,
I'm currently investigating of how to structure my data in Riak. I'm
thinking of having buckets that have the purpose of storing the raw data
and having buckets that store certain views on this data to minimize
lookups and mapreduce operations at runtime. So the latter would in effect
be a
Hi All,
We are facing a very strange map-reduce behavior.
We use ripple in ruby, this is the call:
def yehuda
query_result = Riak::MapReduce.new(Ripple.client).add('usage-test1')
.map("function(v) { data = JSON.parse(v.values[0].data).loc; for (var a in
data) { for (var b in data[a])