I know something: there are examples of MR in the Java client 2.0 that do not
work on my machine, although MR over HTTP works fine.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Java-MR-tp4032117p4032205.html
Sent from the Riak Users mailing list archive at Nabble.com.
>
> On Dec 3, 2014, at 12:38 PM, niedomnie wrote:
>
> I know something: there are examples of MR in the Java client 2.0 that do
> not work on my machine, although MR over HTTP works fine.
Can you provide an example of the jobs that are failing, as well as the output?
Thanks,
Chris
I get no results (it prints 0) with the code below (using the 2.0 Java client
from Maven and connecting to Riak 2.0).
I've changed the map/reduce functions (JS or Erlang) and used different
MapReduce classes (the Bucket, BucketKey, and Index ones), but without any luck.
It is written in Groovy, but I do not think that it differs.
> On Dec 3, 2014, at 1:04 PM, niedomnie wrote:
>
> I get no results (it prints 0) with the code below (using the 2.0 Java
> client from Maven and connecting to Riak 2.0).
> I've changed the map/reduce functions (JS or Erlang) and used different
> MapReduce classes (Bucket, BucketKey, or Index), but without any luck.
Before you comment that the data is not stored in the DB, check first: it is.
And HTTP MapReduce gives results, so that is a false trail.
The data is in the database; client 1.4 is able to run the MR job and fetch
that data (written to the default bucket type), and the HTTP client driven by
curl runs the job and fetches the data.
I've also verified this example without futures (with the execute() function);
it does not work either.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Java-MR-tp4032117p4032210.html
Sent from the Riak Users mailing list archive at Nabble.com.
> On Dec 3, 2014, at 1:15 PM, niedomnie wrote:
>
> Before you comment that the data is not stored in the DB, check first: it is.
> And HTTP MapReduce gives results, so that is a false trail.
> The data is in the database; client 1.4 is able to run the MR job and fetch
> that data (written to the default bucket type)
Always Groovy; I have not tried it with plain Java. But client 1.4 works, and
I've never heard that Groovy could interfere.
I will check it with plain Java, but I do not expect much.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Java-MR-tp4032117p4032212.html
Sent from the Riak Users mailing list archive at Nabble.com.
> On Dec 3, 2014, at 1:24 PM, niedomnie wrote:
>
> Always Groovy; I have not tried it with plain Java. But client 1.4 works,
> and I've never heard that Groovy could interfere.
> I will check it with plain Java, but I do not expect much.
>
If you could send a failing vanilla Java MR job, that would help.
Verified in Java (full code below): it does not work, although the entry/key
is in Riak, and
curl -s -X POST -H "Content-Type: application/json"
http://localhost:8098/mapred -d
'{"inputs":{"bucket":"bucket2","key_filters":[["starts_with", "key"]]},
"query":[{"map":{"language":"javascript", "source":"function(v
This example does not work because I have disabled the phase output, but I
have changed this many times and I will find another example which does not
work, but tomorrow.
For now, thanks Chris for the help.
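For reference, a payload of the same shape as the truncated curl command above
can be validated locally before POSTing it. This is a sketch, not the poster's
exact job: the bucket name and filter prefix are placeholders, and the map
phase uses the built-in `Riak.mapValuesJson` function with `"keep":true`
(a phase whose output is not kept returns nothing, which matches the symptom
described above):

```shell
# Hypothetical mapred payload in the same shape as the truncated command
# above; bucket name and filter prefix are placeholders.
PAYLOAD='{"inputs":{"bucket":"bucket2","key_filters":[["starts_with","key"]]},
"query":[{"map":{"language":"javascript","name":"Riak.mapValuesJson","keep":true}}]}'

# Validate the JSON locally before POSTing it:
echo "$PAYLOAD" | python3 -m json.tool

# Then run it against a node (assumes Riak's HTTP API on localhost:8098):
# curl -s -X POST -H "Content-Type: application/json" \
#      http://localhost:8098/mapred -d "$PAYLOAD"
```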
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Java-MR-tp4032117p4032215.html
Sent from the Riak Users mailing list archive at Nabble.com.
So I've verified that the simple code works fine in Java (1.7), but in Groovy
(2.3.6) it does not.
And the problem is only in Java Client 2.0; 1.4 works fine in Groovy and Java
alike.
Everything is run from IntelliJ (I do not know if that matters).
I am astonished. Till now I've tested only o
Guys, should I worry about creating many process identifiers in my app via
riak_pb_socket:start/start_link? I tried to find it in the documentation, but
I couldn't find a part that explicitly states that Riak will close inactive
processes.
Thanks.
Good morning Riak-Users,
Last night one of the nodes in my 5-node RiakCS cluster went haywire and shot
up to over 90% disk I/O utilization, seemingly out of the blue.
Looking at the Riak error.log I saw the following being continuously written:
2014-12-02 21:57:13.220 [error] <0.29210.3089> CRASH REPORT
Hi Alex,
It looks like you exceeded the open-files ulimit. Information on how to fix
it is here:
http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/#Changing-the-limit
Jon
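As a quick sanity check (a sketch assuming a Linux node; the limit you
actually need depends on your backends and ring size), you can compare the
shell's limit with what the beam.smp process itself received, since a node
started by an init script may not inherit your shell's settings:

```shell
# Open-files limit for the current shell:
ulimit -n

# For a running node, the limit the beam.smp process actually got may
# differ from your shell's (e.g. when started by an init script):
# BEAM_PID=$(pgrep -o beam.smp)
# grep 'Max open files' "/proc/$BEAM_PID/limits"
```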
> On Dec 3, 2014, at 7:15 AM, Alex Millar wrote:
>
> Good morning Riak-Users
>
> Last night one of the nodes in my 5-node RiakCS cluster went haywire and
> shot up to over 90% disk I/O utilization, seemingly out of the blue.
Thanks Jon.
I thought I had bumped the ulimit up; I will need to do some more reading on
this.
Is it possible a node could have had dangling file descriptor references?
(Effectively no "garbage collection" happening, and thus this was just a
tipping point.)
I'm assuming the more likely ca
It's most likely that you haven't increased it from the default; the backends
in Riak require a large number of file descriptors. If you see a repeat of
the problem, please run lsof against the beam.smp process and provide a full
listing of all of the files under the leveldb and bitcask directories.
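A sketch of that kind of check, assuming a Linux box. It is shown against the
current shell's pid (`$$`) since beam.smp may not be running where you try it;
the commented line is the shape you would use against a live node:

```shell
# Count open descriptors for a process; $$ (this shell) stands in for
# the beam.smp pid here:
ls "/proc/$$/fd" | wc -l

# Against a live node with lsof installed, counting backend files only:
# lsof -p "$(pgrep -o beam.smp)" | grep -cE 'leveldb|bitcask'
```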
Yang,
Please use the default schema as a starting point to create your own that
fits your needs:
https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml
More information is available in the documentation:
http://docs.basho.com/riak/latest/dev/advanced/search-schema/
--
Luke Bakken
Engineer / CSE
> On Dec 3, 2014, at 2:09 PM, Taufan Adhitya wrote:
>
> Guys, should I worry about creating many process identifiers in my app via
> riak_pb_socket:start/start_link? I tried to find it in the documentation,
> but I couldn't find a part that explicitly states that Riak will close
> inactive processes.
Thanks for the information, appreciated.
Taufan.
2014-12-03 23:52 GMT+07:00 Christopher Meiklejohn :
> > On Dec 3, 2014, at 2:09 PM, Taufan Adhitya wrote:
> >
> > Guys, should I worry about creating many process identifiers in my app
> > via riak_pb_socket:start/start_link? I tried to find it in the
> > documentation
I've noticed the same problem (with the $key index); by overriding (as many
as 3 classes) you can work around this error, but it is awkward.
A better solution should be delivered.
PS: I've reported the bug on GitHub against riak-java-client.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Using-b
How should the inputs for a MapReduce job look over HTTP?
Something like this works on the default bucket_type:
curl -s -X POST -H "Content-Type: application/json"
http://localhost:8098/mapred -d
'{"inputs":{"bucket":"history100","key_filters":[["starts_with", "2014"]]},
"query":[{"map":{"language":"javascript", "name":"Riak.mapValues"}} ]}'
Ok, I've just found it:
curl -s -X POST -H "Content-Type: application/json"
http://localhost:8098/mapred -d '{"inputs":{"bucket":["bucket_type",
"bucket"],"key_filters":[["starts_with", "2014"]]},
"query":[{"map":{"language":"javascript", "name":"Riak.mapValues"}} ]}'