Somehow (by resetting folder permissions) I got rid of the error below.
But now I'm getting a new error, shown below. It looks like I'm missing some
configuration, but I'm not sure what and where.
/hive> select count(1) from table1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks
Sorry for the late reply.
For anything which you want to run as MAP and REDUCE, you have to extend the
MapReduce classes for your functionality, irrespective of language (Java, Python or
any other). Once you have extended the class, move the JAR to the Hadoop cluster.
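For languages other than Java, this is usually done with Hadoop Streaming rather than by extending classes: the script just reads lines on stdin and emits tab-separated key/value pairs on stdout. A minimal Python sketch (word count chosen purely as an illustration, not what the poster was running):

```python
import sys

def map_words(line):
    """Emit one tab-separated (word, 1) pair per word, the way a
    Hadoop Streaming mapper would write them to stdout."""
    return [f"{word}\t1" for word in line.split()]

if __name__ == "__main__":
    for line in sys.stdin:
        for pair in map_words(line):
            print(pair)
```

A matching reducer script would then sum the counts per key; Hadoop handles the shuffle and sort between the two.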
Bertrand has also mentioned reflecti
I am interested in Dilip's response.
I didn't look at the details and stopped my proof of concept using Hive
with JDBC due to the Thrift server not being 'concurrent safe'.
Of course, as Dilip said, the driver call can be made directly from Java
(or any language having a binding to it). S
You don't necessarily need to run the thrift service to use JDBC. Please
see:
http://csgrad.blogspot.in/2010/04/to-use-language-other-than-java-say.html.
Dilip
On Tue, Sep 25, 2012 at 11:01 AM, Abhishek wrote:
> Hi Zhou,
>
> Thanks for the reply, we are shutting down thrift service due to secur
But remember that you are running on parallel machines. Depending on the
hardware configuration, more map tasks can be BETTER.
From: John Omernik [j...@omernik.com]
Sent: Tuesday, September 25, 2012 7:11 PM
To: user@hive.apache.org
Subject: Re: Hive File Sizes, Mergi
Isn't there an overhead associated with each map task? Based on that, my
hypothesis is that if I pay attention to my data, merge up small files after
load, and ensure split sizes are close to file sizes, I can keep the
number of map tasks to an absolute minimum.
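The arithmetic behind that hypothesis can be sketched roughly like this (the function and its split semantics are a deliberate simplification for illustration, not Hadoop's actual split logic):

```python
import math

def estimate_map_tasks(file_sizes, split_size):
    """Rough estimate: each file costs at least one map task, plus one
    more for every additional split boundary it crosses. Small files
    therefore dominate the task count regardless of total data volume."""
    return sum(max(1, math.ceil(size / split_size)) for size in file_sizes)
```

With a 128-unit split, twenty files of size 10 cost twenty map tasks, while the same 200 units merged into two files cost only two — which is why merging small files after load cuts per-task overhead.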
On Tue, Sep 25, 2012 at 2:35 PM, Con
Why do you think the current generated code is inefficient?
From: John Omernik [mailto:j...@omernik.com]
Sent: Tuesday, September 25, 2012 2:57 PM
To: user@hive.apache.org
Subject: Hive File Sizes, Merging, and Splits
I am really struggling trying to make heads or tails out of how to optimize t
Hi, I am using Cloudera release cdh3u3, which has Hive version 0.7.1.
I am trying to write a Hive UDF to calculate a moving sum. Right
now, I am having trouble getting the constant value passed in at the
initialization stage.
For example, let's assume the function is like the fo
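Independent of Hive's UDF API, the moving-sum computation itself (the part the UDF would perform per row, with the window size as the constant passed at initialization) might look like this — function and parameter names here are hypothetical:

```python
def moving_sum(values, window):
    """Trailing moving sum over a fixed-size window: each output element
    is the sum of the current value and up to (window - 1) predecessors."""
    out, total = [], 0
    for i, v in enumerate(values):
        total += v
        if i >= window:
            # Drop the value that just fell out of the window.
            total -= values[i - window]
        out.append(total)
    return out
```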
I am really struggling trying to make heads or tails out of how to optimize
the data in my tables for best query times. I have a partition that is
compressed (Gzip) RCFile data in two files
total 421877
263715 -rwxr-xr-x 1 darkness darkness 270044140 2012-09-25 13:32 00_0
158162 -rwxr-xr-x 1
Hi Doug,
Thanks for the reply. Can you point me to the CLI code?
Regards
Abhi
Sent from my iPhone
On Sep 25, 2012, at 1:53 PM, Doug Houck wrote:
> Also, this is all open source, right? So you could take a look at the CLI
> code to see how it does it.
>
> - Original Message -
> From
Hi Zhou,
Thanks for the reply. We are shutting down the Thrift service due to security
issues with Hive.
Regards
Abhi
Sent from my iPhone
On Sep 25, 2012, at 1:53 PM, Doug Houck wrote:
> Also, this is all open source, right? So you could take a look at the CLI
> code to see how it does it.
>
Also, this is all open source, right? So you could take a look at the CLI code
to see how it does it.
- Original Message -
From: "Abhishek"
To: user@hive.apache.org
Cc: user@hive.apache.org
Sent: Tuesday, September 25, 2012 1:44:41 PM
Subject: Re: How connect to hive server without usin
The page also contains information about using other APIs to connect to
Hive.
On Tue, Sep 25, 2012 at 1:44 PM, Abhishek wrote:
> Hi Zhou,
>
> But I am looking to connect to the Hive server without JDBC, some other way
> through an API.
>
> Regards
> Abhi
>
> Sent from my iPhone
>
> On Sep 25, 2012, at 1:
Hi Zhou,
But I am looking to connect to the Hive server without JDBC, some other way
through an API.
Regards
Abhi
Sent from my iPhone
On Sep 25, 2012, at 1:15 PM, Haijia Zhou wrote:
> https://cwiki.apache.org/Hive/hiveclient.html#HiveClient-JDBC
>
> On Tue, Sep 25, 2012 at 1:13 PM, Abhishek wrote:
https://cwiki.apache.org/Hive/hiveclient.html#HiveClient-JDBC
On Tue, Sep 25, 2012 at 1:13 PM, Abhishek wrote:
> Hi all,
>
> Is there any way to connect to hive server through API??
>
> Regards
> Abhi
>
>
>
> Sent from my iPhone
>
Hi all,
Is there any way to connect to hive server through API??
Regards
Abhi
Sent from my iPhone
For java, you can also consider reflection :
http://hive.apache.org/docs/r0.9.0/udf/reflect.html
Regards
Bertrand
On Tue, Sep 25, 2012 at 3:18 PM, Tamil A <4tamil...@gmail.com> wrote:
> Hi Manish,
>
> Thanks for your help. I did the same using a UDF. Now trying with
> Transform, Map and Reduce claus
Thanks Manish. I'll try the same.
Thanks & Regards,
Manu
On Tue, Sep 25, 2012 at 5:19 PM, Manish.Bhoge wrote:
> Manu,
>
>
> If you have written UDF in Java for Hive then you need to copy your JAR on
> your Hadoop cluster in /usr/lib/hive/lib/ folder to hive to
Hi Manish,
Thanks for your help. I did the same using a UDF. Now I'm trying with the
Transform, Map and Reduce clauses. So does that mean that with Java we have to
go through a UDF, and for other languages use MapReduce scripts, i.e., the
Transform, Map and Reduce clauses?
Please correct me if I am wrong.
Thanks
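For context on the TRANSFORM route: the script is just a program that reads tab-separated rows on stdin and writes rows to stdout, in any language. A minimal Python sketch (the specific column transformation here is arbitrary, purely for illustration):

```python
import sys

def transform_line(line):
    """Upper-case the first column of a tab-separated row, the format
    in which Hive's TRANSFORM clause streams rows to the script."""
    fields = line.rstrip("\n").split("\t")
    fields[0] = fields[0].upper()
    return "\t".join(fields)

if __name__ == "__main__":
    for line in sys.stdin:
        print(transform_line(line))
```

Hive would invoke it with something along the lines of `SELECT TRANSFORM(cols) USING 'script.py' AS (...)` after adding the script as a resource.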
The JAR is being looked up on HDFS, as the exception suggests. Run the following
commands:
$ hadoop fs -mkdir /usr/lib/hive/lib
$ hadoop fs -put $HIVE_HOME/lib/hive-builtins-0.8.1-cdh4.0.1.jar
/usr/lib/hive/lib
Your queries should work now.
On Sep 25, 2012, at 6:46 AM, Manish.Bhoge wrote:
> Sara
Manu,
If you have written a UDF in Java for Hive, then you need to copy your JAR to
the /usr/lib/hive/lib/ folder on your Hadoop cluster for Hive to use it.
Thank You,
Manish
From: Manu A [mailto:hadoophi...@gmail.com]
Sent: Tuesday, September 25, 2012 3:44 PM
To: user@hive.apache.org
Subject: Cu
Sarath,
Is this the external table where you ran the query? How did you load the
table? It looks like the error is about a file related to the table rather
than the CDH JAR.
Thank You,
Manish
From: Sarath [mailto:sarathchandra.jos...@algofusiontech.com]
Sent: Tuesday, September 25, 2012 3:4