> Is it good practice then to find an attribute in my data that would allow me
> to form wide-row row keys with approx. 1000 values each?
You can do that using get_range_slices() via Thrift.
Via CQL 3 you use the token() function and LIMIT with a SELECT statement.
Check the DataStax docs for more information.
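Something along these lines shows the CQL 3 approach (an untested sketch using the DataStax Java driver; the table name "wide_rows", its columns, and the page size are made up, and it assumes one CQL row per partition key — for a genuinely wide row you would fetch each partition separately once you have its key):

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class TokenPager {
        // Walks every partition of a hypothetical "wide_rows" table in token
        // order, roughly 1000 partition keys per SELECT, as described above.
        static void scanAll(Session session) {
            PreparedStatement next = session.prepare(
                "SELECT key, value FROM wide_rows WHERE token(key) > token(?) LIMIT 1000");

            String lastKey = null;
            while (true) {
                ResultSet rs = (lastKey == null)
                    ? session.execute("SELECT key, value FROM wide_rows LIMIT 1000")
                    : session.execute(next.bind(lastKey));

                int seen = 0;
                for (Row row : rs) {
                    lastKey = row.getString("key");  // remember where this page ended
                    seen++;
                    // process the row here
                }
                if (seen < 1000)
                    break;  // a short page means we have reached the end
            }
        }
    }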
Aaron,
On 12.08.2013, at 23:17, Aaron Morton wrote:
> As I do not have billions of input records (but a max of 10 million) the
> added benefit of scaling out the per-line processing is probably not worth
> the additional setup and operations effort of Hadoop.
I would start with a regular app and then go to Hadoop if needed, assuming you
are on
> Is it possible to use CL_ONE with Hadoop/Cassandra when doing an M/R job?
That's the default.
https://github.com/apache/cassandra/blob/cassandra-1.2/src/java/org/apache/cassandra/hadoop/ConfigHelper.java#L383
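For completeness, a rough sketch of the input-side job setup (the keyspace, column family and address are made-up values; the consistency-level property name is the one ConfigHelper reads in 1.2, per the file linked above):

    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobSetup {
        // Input-side configuration for a Cassandra M/R job. Addresses, keyspace
        // and column family are made-up values; adjust for your cluster.
        static Job configure() throws Exception {
            Job job = new Job(new Configuration(), "my-cassandra-job");
            Configuration conf = job.getConfiguration();
            ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
            ConfigHelper.setInputRpcPort(conf, "9160");
            ConfigHelper.setInputPartitioner(conf, "org.apache.cassandra.dht.Murmur3Partitioner");
            ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_cf");
            // Read consistency defaults to ONE; setting it explicitly looks like this.
            conf.set("cassandra.consistencylevel.read", "ONE");
            return job;
        }
    }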
> And more importantly is there a way to configure that such that if my RF=3,
> that
It's an inter-node timeout waiting for the read to complete. It normally means the
cluster is overloaded in some fashion; check for GC activity and/or overloaded
IOPS.
If you reduce the batch_size it should help.
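For example, something like this on the job configuration (256 is only an illustrative value; the default batch is much larger):

    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.hadoop.conf.Configuration;

    public class BatchSizeTuning {
        // Ask ColumnFamilyInputFormat for fewer rows per request so each internal
        // read finishes well inside the rpc timeout. 256 is only an illustration.
        static void shrinkBatch(Configuration conf) {
            ConfigHelper.setRangeBatchSize(conf, 256);
            // equivalent to: conf.set("cassandra.range.batch.size", "256");
        }
    }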
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
I use Cassandra 1.2.2 and Hadoop 1.0.4
2013/3/11 Renato Marroquín Mogrovejo
Hi there,
Check this out [1]. It's kinda old but I think it will help you get started.
Renato M.
[1] http://www.datastax.com/docs/0.7/map_reduce/hadoop_mr
2013/3/11 oualid ait wafli :
> Hi
>
> I need a tutorial for deploying Hadoop+Cassandra on a single node
>
> Thanks
I would first look at http://wiki.apache.org/cassandra/HadoopSupport - you'll
want to look in the section on cluster configuration. DataStax also has a
product that makes it pretty simple to use Hadoop with Cassandra if you don't
mind paying for it - http://www.datastax.com/products/enterprise
Thanks Jeremy, it's a good pointer to start with.
regards
Sagar
From: Jeremy Hanna [jeremy.hanna1...@gmail.com]
Sent: Thursday, March 17, 2011 7:34 PM
To: user@cassandra.apache.org
Subject: Re: hadoop cassandra
You can start with a word count example that's only for HDFS. Then you can
replace the reducer in that with the ReducerToCassandra that's in the Cassandra
word_count example. You need to match up your Mapper's output to the Reducer's
input and set a couple of configuration variables to tell it how to connect to
Cassandra and where to write the output.
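Roughly, the swapped-in reducer looks like this (loosely modeled on the word_count example shipped with Cassandra; the column name and the <Text, IntWritable> mapper output types are assumptions):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.Collections;
    import java.util.List;

    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnOrSuperColumn;
    import org.apache.cassandra.thrift.Mutation;
    import org.apache.cassandra.utils.ByteBufferUtil;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Writes each word's total straight to Cassandra instead of HDFS. The Mapper
    // must emit <Text, IntWritable> pairs to match this reducer's input types.
    public class ReducerToCassandra
            extends Reducer<Text, IntWritable, ByteBuffer, List<Mutation>> {

        @Override
        public void reduce(Text word, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values)
                sum += val.get();

            // One column named "count" holding the total; the word is the row key.
            Column col = new Column();
            col.setName(ByteBufferUtil.bytes("count"));
            col.setValue(ByteBufferUtil.bytes(sum));
            col.setTimestamp(System.currentTimeMillis());

            Mutation m = new Mutation();
            m.setColumn_or_supercolumn(new ColumnOrSuperColumn());
            m.column_or_supercolumn.setColumn(col);

            context.write(ByteBufferUtil.bytes(word.toString()),
                          Collections.singletonList(m));
        }
    }

In the job driver you then set ColumnFamilyOutputFormat as the output format and point it at Cassandra with ConfigHelper.setOutputColumnFamily(...), setOutputInitialAddress(...), setOutputRpcPort(...) and setOutputPartitioner(...) — those are the configuration variables mentioned above.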