...located on the same hosts.
Remember that Pig code runs inside of your Hadoop cluster and connects to
Cassandra as the database engine.
I have not done any testing with Hive, so someone else will have to answer
that question.

From: cscetbon@orange.com
Reply-To: user@cassandra.apache.org
Date: Thursday, January 17, 2013 8:58 AM
To: user@cassandra.apache.org
Subject: Re: Pig / Map Reduce on Cassandra

Jimmy,
I understand that CFS can replace HDFS for those who use Hadoop. I just want
to use Pig and Hive on Cassandra. I know that Pig samples are provided and
work now with Cassandra natively (they are part of the core). However, does
it mean that the process will be spread over nodes with
number_of_mapper=number_of_nodes or something like that?
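On the number_of_mapper question: as far as I understand Cassandra's Hadoop support, ColumnFamilyInputFormat derives its input splits from the cluster's token ranges rather than allocating exactly one mapper per node. Each range is subdivided into splits of roughly cassandra.input.split.size rows, and each split becomes one map task. A rough sketch of the relevant knob follows; the keyspace, column family, and split size are placeholder values, not taken from this thread:

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;

public class SplitSizeExample {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        // Placeholder keyspace and column family names.
        ConfigHelper.setInputColumnFamily(conf, "MyKeyspace", "MyColumnFamily");
        // Approximate rows per input split (cassandra.input.split.size).
        // Each node's token ranges are chopped into pieces of about this many
        // rows, and every piece becomes one map task, so the mapper count
        // usually exceeds the node count.
        ConfigHelper.setInputSplitSize(conf, 65536);
        return conf;
    }
}

Hadoop then tries to schedule each of those map tasks on a node that holds a replica of the corresponding range, which is why the task trackers and Cassandra nodes are typically co-located on the same hosts.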
http://www.thelastpickle.com

On 18/01/2013, at 7:48 AM, James Lyons wrote:
> Silly question -- but does hive/pig hadoop etc work with cassandra
> 1.1.8? Or o...
...s/pig --
http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-src.tar.gz
--Jimmy

From: cscetbon@orange.com
Reply-To: user@cassandra.apache.org
Date: Thursday, January 17, 2013 6:35 AM
To: user@cassandra.apache.org
Subject: Re: Pig / Map Reduce on Cassandra
What do you mean? It's not needed by Pig or Hive to access Cassandra data.
Regards

On Jan 16, 2013, at 11:14 PM, Brandon Williams <dri...@gmail.com> wrote:
You won't get CFS, but it's not a hard requirement, either.
On Wed, Jan 16, 2013 at 2:37 PM, wrote:
> Here is the point. You're right this github repository has not been updated
> for a year and a half. I thought brisk was just a bundle of some technologies
> and that it was possible to install the same components and make them work
> together without using this bundle :(
Brisk is pretty much stagnant. I think someone forked it to work with 1.0
but not sure how that is going. You'll need to pay for DSE to get CFS
(which is essentially Brisk) if you want to use any modern version of C*.
Best,
Michael
On 1/16/13 11:17 AM, "cscetbon@orange.com" wrote:
Thanks, I understand that your code uses the Hadoop interface of Cassandra to be
able to read from it with a job. However, I would like to know how to bring the
pieces (Hive + Pig + Hadoop) together with Cassandra as the storage layer, not
to get code to test it. I have found repository
https://githu...
Try this one then; it reads from Cassandra, then writes back to Cassandra,
but you could change the write to wherever you would like.
getConf().set(IN_COLUMN_NAME, columnName);
Job job = new Job(getConf(), "ProcessRawXml");
job.setInputFormatClass(ColumnFamilyInputFormat.class);
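To flesh out that fragment, here is a minimal, hypothetical job setup along the same lines, assuming Cassandra 1.1/1.2's org.apache.cassandra.hadoop classes. The contact address, port, partitioner, keyspace, column family, and column name are placeholders rather than values from this thread, and the mapper/output wiring is stubbed out:

import java.nio.ByteBuffer;
import java.util.Arrays;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ProcessRawXmlJob {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "ProcessRawXml");
        job.setJarByClass(ProcessRawXmlJob.class);

        // Read input splits directly from Cassandra instead of HDFS.
        job.setInputFormatClass(ColumnFamilyInputFormat.class);

        // Any live Cassandra node plus its Thrift port (placeholders).
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "127.0.0.1");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputPartitioner(job.getConfiguration(),
                "org.apache.cassandra.dht.RandomPartitioner");

        // Placeholder keyspace / column family to read from.
        ConfigHelper.setInputColumnFamily(job.getConfiguration(),
                "MyKeyspace", "MyColumnFamily");

        // Restrict each mapper to the column it needs ("xml" is a placeholder).
        SlicePredicate predicate = new SlicePredicate()
                .setColumn_names(Arrays.asList(ByteBuffer.wrap("xml".getBytes())));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

        // Map-only skeleton that discards its output; a real job would set a
        // mapper class and either an HDFS or Cassandra output format here.
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

As far as I know, Pig's CassandraStorage loader sits on top of the same input format, so a Pig script gets the same split and data-locality behaviour as a hand-written job like this.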
I don't want to write to Cassandra as it replicates data from another
datacenter, but I just want to use Hadoop Jobs (Pig and Hive) to read data from
it. I would like to use the same configuration as
http://www.datastax.com/dev/blog/hadoop-mapreduce-in-the-cassandra-cluster but
I want to know i...
Here are a few examples I have worked on, reading from xml.gz files then
writing to Cassandra.
https://github.com/jschappet/medline
You will also need:
https://github.com/jschappet/medline-base
These examples are Hadoop Jobs using Cassandra as the Data Store.
This one is a good place to st...
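One common way such jobs write into Cassandra (I have not checked how the medline repositories do it) is ColumnFamilyOutputFormat from the same org.apache.cassandra.hadoop package. A hypothetical sketch of just the output-side configuration, again with placeholder address, port, partitioner, and names:

import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WriteToCassandraSetup {
    // Wires a job to emit (ByteBuffer row key, List<Mutation>) pairs into a
    // Cassandra column family instead of HDFS files.
    public static Job configure() throws Exception {
        Job job = new Job(new Configuration(), "WriteToCassandra");
        job.setOutputFormatClass(ColumnFamilyOutputFormat.class);

        // Placeholder contact point, Thrift port, partitioner and target table.
        ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "127.0.0.1");
        ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setOutputPartitioner(job.getConfiguration(),
                "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setOutputColumnFamily(job.getConfiguration(),
                "MyKeyspace", "MyColumnFamily");
        return job;
    }
}

A reducer (or mapper, in a map-only job) then writes each row as a ByteBuffer key plus a List of Thrift Mutation objects, and the output format batches them to the cluster over Thrift.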