Hi,
AFAIK, you would need to use the HCatalog APIs to read from/write to an
ORCFile. Please refer to
https://cwiki.apache.org/confluence/display/Hive/HCatalog+InputOutput
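Roughly, the job setup looks something like the sketch below. This is only an
illustration: the Hadoop 2 style API and the org.apache.hive.hcatalog packages
are assumed, and "default.orc_table" with a single string column is a
placeholder for a table that already exists in the metastore as STORED AS ORC.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hive.hcatalog.data.DefaultHCatRecord;
import org.apache.hive.hcatalog.data.HCatRecord;
import org.apache.hive.hcatalog.mapreduce.HCatOutputFormat;
import org.apache.hive.hcatalog.mapreduce.OutputJobInfo;

public class OrcTableWriteJob {

  // Placeholder mapper: wraps each input line into a one-column HCatRecord.
  public static class LineMapper
      extends Mapper<Object, Text, NullWritable, HCatRecord> {
    @Override
    protected void map(Object key, Text value, Context context)
        throws java.io.IOException, InterruptedException {
      HCatRecord record = new DefaultHCatRecord(1);
      record.set(0, value.toString());
      context.write(NullWritable.get(), record);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "write-to-orc-table");
    job.setJarByClass(OrcTableWriteJob.class);
    job.setMapperClass(LineMapper.class);
    job.setNumReduceTasks(0);

    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    // The target table ("default.orc_table" here) must already exist and be
    // declared STORED AS ORC; HCatOutputFormat picks up its schema and
    // storage format from the metastore.
    HCatOutputFormat.setOutput(job,
        OutputJobInfo.create("default", "orc_table", null));
    HCatOutputFormat.setSchema(job,
        HCatOutputFormat.getTableSchema(job.getConfiguration()));
    job.setOutputFormatClass(HCatOutputFormat.class);
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(DefaultHCatRecord.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}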
-Abhishek
On Tue, Apr 29, 2014 at 6:40 AM, Seema Datar wrote:
> Hi,
>
> I am trying to run an MR job to write files in ORC
Hi Nivedita,
Please send an email to user-subscr...@hive.apache.org to subscribe.
-Abhishek
On Mon, Apr 28, 2014 at 8:15 AM, Nivedhita Sathyamurthy <
nivedhitasat...@gmail.com> wrote:
Hello,
I am hitting an error while executing a Hive job inside MapReduce:
*Code snippet:*
String select1 = "SELECT a FROM abc";
driver.run(select1);
...
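For reference, the surrounding setup is roughly along the lines of the minimal
sketch below (the HiveConf/SessionState details are assumptions about a typical
embedded-Driver setup, not the actual code in question):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.session.SessionState;

public class EmbeddedHiveQuery {
  public static void main(String[] args) throws Exception {
    // Build a HiveConf and start a Hive session before constructing the Driver.
    HiveConf conf = new HiveConf(SessionState.class);
    SessionState.start(conf);

    Driver driver = new Driver(conf);
    // Same call pattern as in the snippet above.
    driver.run("SELECT a FROM abc");
  }
}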
*Error:*
INFO exec.ExecDriver: Executing: /usr/local/hadoop-0.20.2/bin/*hadoop jar
null* org.apache.hadoop.hive.ql.exec.ExecDriver -plan
fi
ne command in your Pig script.
>
> Eugene
>
>
> On Sun, Apr 6, 2014 at 4:17 PM, Abhishek Girish wrote:
>
>> Hi,
>>
>> I am working on a custom Pig source code that writes RDF data into text
>> files. I was looking to instead *write to an ORCFile* for some of the
>> columnar benefits it offers.
Hi,
I am working on a custom Pig source code that writes RDF data into text
files. I was looking to instead *write to an ORCFile* for some of the
columnar benefits it offers.
I understand that I need to use *HCatalog APIs*. I have an idea of how to
create an HCatSchema for my data. And that I would
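A rough sketch of building such an HCatSchema, assuming a simple three-column
(subject, predicate, object) layout for the RDF triples, could look like this:

import java.util.Arrays;
import org.apache.hive.hcatalog.common.HCatException;
import org.apache.hive.hcatalog.data.schema.HCatFieldSchema;
import org.apache.hive.hcatalog.data.schema.HCatSchema;

public class RdfSchemaSketch {
  // Builds a hypothetical three-column schema for RDF triples.
  public static HCatSchema tripleSchema() throws HCatException {
    return new HCatSchema(Arrays.asList(
        new HCatFieldSchema("subject", HCatFieldSchema.Type.STRING, "RDF subject"),
        new HCatFieldSchema("predicate", HCatFieldSchema.Type.STRING, "RDF predicate"),
        new HCatFieldSchema("object", HCatFieldSchema.Type.STRING, "RDF object")));
  }
}

The records themselves would then be DefaultHCatRecord instances written out
through HCatOutputFormat (or HCatStorer from a Pig script), with the ORC side
handled by declaring the target Hive table STORED AS ORC.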