I gather that the -i option is not applicable to the Hive server;
let me dig through to find out if there is any other option.
On the other hand, out of curiosity: if you are writing the code to access
the hiveserver, what's the harm in defining these parameters in your code?
On Tue, Jun 12, 2012 at 12:12 PM, Sreenath Menon wrote:
Is there any way to make the .hiverc file be executed even in a
hiveserver instance?
A simple approach like this:
hive --service hiveserver -i .hiverc
does not work, Nitin.
Is there any other way, Nitin? I just want to add a single jar file and do
not know much about custom Hive builds. And this requirement may vary at
some other point in time; building Hive each time I need a new jar added
is not a good approach.
Aniket, his problem is that he does not want to create that function each
time; he wants it available in every session with the Hive server.
So we are suggesting a custom Hive build, where he will bundle his UDF with
Hive and have it available with the Hive server.
On Tue, Jun 12, 2012 at 11:58 AM, Aniket Mokashi
I mean, every time you connect to the Hive server, execute:
create temporary function ...;
your hive query ...;
~Aniket
On Mon, Jun 11, 2012 at 11:27 PM, Aniket Mokashi wrote:
Put the jar on the Hive classpath (the lib directory, etc.) and do a
CREATE TEMPORARY FUNCTION every time you connect to the server.
What version of hive are you on?
~Aniket
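As a hedged sketch of that per-connection setup (the jar path is an assumption; the jar name, function name, and query are taken from elsewhere in this thread), each new session would run:

```sql
-- run at the start of every hiveserver session (path is hypothetical)
add jar /usr/local/hive/lib/twittergen.jar;
create temporary function link as 'retweetlink';

-- then the actual query
select link(tweet) from tweetsdata;
```

This is exactly the boilerplate the .hiverc file removes in CLI mode.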
On Mon, Jun 11, 2012 at 11:12 PM, Sreenath Menon
wrote:
I have a jar file, 'twittergen.jar'; how can I add it to the Hive lib?
Kindly help. I need the function to be usable across sessions when running a
server instance. I am stuck on this.
Sorry for the late reply, Ashutosh.
Thanks for the pointers.
I will soon try it out with the hive version
-Sagar
On Sat, Jun 2, 2012 at 10:30 AM, Ashutosh Chauhan wrote:
> Hey Sagar,
>
> Seems like you have inserted data in your hbase table directly through
> hbase client and not through hive c
Hi all,
I have an avg() problem with double and int. First, I am running hive-0.7.0
on hadoop-0.20.2, and I run avg(cast(var as int)) and avg(cast(var as
double)), which give different answers. The var is an integer stored as a
string. I also tried avg(var) directly, which gives the same result as
avg(cast(var
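A minimal sketch of the comparison being described, assuming a hypothetical table t whose column var holds integer values stored as STRING:

```sql
-- hypothetical table t; var is a STRING column containing integer text
SELECT avg(cast(var AS int))    AS avg_as_int,
       avg(cast(var AS double)) AS avg_as_double,
       avg(var)                 AS avg_implicit  -- implicit string conversion
FROM t;
```

Comparing the three columns side by side makes it easy to see which cast the implicit conversion matches.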
Hey Abhishek,
Hive manages your dist-cache automatically. What issue are you running
into when trying to map-join, that you wish to solve?
MapJoin docs can be found here:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins
P.s. Also moved this to user@hive.apache.org. BCC'd com
Matt – changing the DNS resolved the Hive errors, but led to other issues
which I'm afraid I can't remember right now. I just remember the change
broke something else, so the best course seemed to be to fix the metadata.
This of course doesn't mean you'll hit the same issue, but on the other
hand i
Hi Jon,
I've just encountered the same issue.
I was wondering if you would be so kind as to elaborate on why you'd be
best off manipulating the metadata as opposed to trying to manipulate the
DNS?
I had a go at having the Namenode use a DNS alias; the Hive
metadata did indeed conta
thank you.
On Mon, Jun 11, 2012 at 9:03 AM, Bejoy KS wrote:
You just need to enable map joins before executing your join query:
hive> SET hive.auto.convert.join=true;
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: abhishek dodda
Date: Mon, 11 Jun 2012 08:59:49
To:
Reply-To: user@hive.apache.org
Subject: Hive
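As a hedged sketch (the table and column names are hypothetical, not from the thread), a session using automatic map-join conversion looks like:

```sql
-- let Hive convert eligible joins to map-side joins automatically
SET hive.auto.convert.join=true;

-- if small_table fits under the small-table size threshold,
-- this join can be executed map-side, with no reduce phase
SELECT b.id, s.label
FROM big_table b
JOIN small_table s ON (b.id = s.id);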
You can check out the Hive code, build your UDF, and ship it with Hive :)
a custom Hive for yourself ... if the function is generic, feel free to share
it on git :)
On Mon, Jun 11, 2012 at 9:28 PM, Sreenath Menon wrote:
Yes, UDFs do not live across sessions. But what if I just want the temporary
function to be created at the start of each new session? This is what is done
with the help of .hiverc, but again, this works only in CLI mode, not in
server mode.
BTW, I am interested to know how to build the function into Hive, k
UDFs do not live across sessions; this is why the syntax is "CREATE
TEMPORARY FUNCTION". You can build the function into Hive, and then you
will not need to add the UDF.
On Mon, Jun 11, 2012 at 11:31 AM, Sreenath Menon
wrote:
> I have tried that before. It does not work. But anyways thanks for th
Why not write big joins and generate a single query which will get the
expected results you want?
Or you can write queries that insert the intermediate data into temp tables,
and clean them up once your execution is over.
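A minimal sketch of the temp-table approach, reusing the items/itemSid names that appear later in the thread (the staging-table name is an assumption):

```sql
-- stage an intermediate result (hypothetical table name)
CREATE TABLE tmp_item_views AS
SELECT itemSid, count(*) AS views
FROM items
GROUP BY itemSid;

-- ... further queries can join against tmp_item_views here ...

-- clean up once the run is over
DROP TABLE tmp_item_views;
```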
On Mon, Jun 11, 2012 at 6:23 PM, Cam Bazz wrote:
Nitin,
Any idea on invoking .hiverc when running: /usr/hive/bin/hive --service
hiveserver?
This works when I am using the Hive CLI.
I.e. when I issue: select link(tweet) from tweetsdata; in the CLI, having
defined the function 'link' in .hiverc, it works.
But when I run /usr/hive --service hiveserver
and use the function in PowerPivot, it says that 'link' is not defined.
Hello Nitin,
Yes, I want to write these results back to some RDBMS (Postgres), and I
had written some sort of merge program to load text data into
Postgres, but I will look at Sqoop.
Currently there is no program, but I will either write a script, or one in Java.
I will be making a number of quer
If you want to write these results back to some RDBMS, then you can use
Sqoop.
If you want to save the results to some file, then just redirect the output
of the query to that file.
Can you tell us how you are executing the Hive query from your code? That
will be helpful for answering your question.
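A hedged sketch of both options (the connection string, target table, and paths are assumptions, not from the thread):

```shell
# option 1: save query results to a local file
hive -e 'SELECT itemSid, count(*) FROM items GROUP BY itemSid' > /tmp/item_counts.tsv

# option 2: export a warehouse directory to Postgres with Sqoop
sqoop export \
  --connect jdbc:postgresql://dbhost/mydb \
  --table item_counts \
  --export-dir /user/hive/warehouse/item_counts \
  --input-fields-terminated-by '\001'
```

Sqoop's export expects the target table to already exist in the RDBMS, and Hive's default field delimiter is ^A (\001), hence the terminator flag.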
On Mon, Jun
Hello,
I have finally written a program to upload my data to Amazon S3, start a
cluster on Amazon EMR, and recover my partitions, and I can issue simple
queries on Hive.
Now I would like to:
select count(*), itemSid from items group by itemSid <- gives me how
many times an item was viewed
and another
If you have created a file with a name other than ".hiverc", you will need to
start Hive with this file,
something like: hive -i hiverc
But when you create a file named .hiverc in your home directory, the Hive CLI
picks it up automatically.
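For illustration, a ~/.hiverc along these lines (reusing the jar path and function name that appear later in this thread) is executed by the CLI at the start of every session:

```sql
-- contents of ~/.hiverc, run automatically by the Hive CLI on startup
add jar /usr/local/hadoop/src/retweetlink1.jar;
create temporary function link as 'retweetlink';
```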
On Mon, Jun 11, 2012 at 6:13 PM, Sreenath Menon wrote:
OK, so I have created a file 'sample.hiverc' in the home directory. How do I
run this particular file?
In your home directory (if you are using Linux in a VM) you will need
to create that file and add the entries exactly the same way you would add
them in the Hive CLI.
On Mon, Jun 11, 2012 at 6:06 PM, Sreenath Menon wrote:
Hi Nitin,
Can you kindly help me (briefly) with how to add to .hiverc? No such location
exists on my machine.
Include it in your ~/.hiverc to have it available across sessions.
On Mon, Jun 11, 2012 at 5:42 PM, Sreenath Menon wrote:
Hi,
I am using Hive with Microsoft PowerPivot as the visualization tool.
When I am running a query involving a UDF like this from PowerPivot:
add jar /usr/local/hadoop/src/retweetlink1.jar;
create temporary function link as 'retweetlink';
followed by a select statement, the query executes fine for t
Thank you, I didn't know about that.
Guillaume Polaert | Cyrès Conseil
From: Gabi D [mailto:gabi...@gmail.com]
Sent: Monday, June 11, 2012 12:14
To: user@hive.apache.org
Cc: Matouk Iftissen
Subject: Re: Trouble with sum function
From the code here:
http://svn.apache.org/viewvc/hive/branches/branch-0.7/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java?view=markup
For float, double, and string, the implementation points to the common
function GenericUDAFSumDouble():
if (parameters[0].getCategory() != ObjectIn
Float is known to have precision issues because of the way it is
implemented. If you are working with money data you should definitely move
to double.
Google 'float precision' and you'll find a bunch of explanations.
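A hedged sketch of the difference, using the test_table column names reported in this thread (the DDL is an assumption): many decimals such as 0.32 have no exact binary representation, and a 32-bit FLOAT carries only about 7 significant decimal digits versus roughly 15 for DOUBLE, so sums over FLOAT columns can drift visibly.

```sql
-- hypothetical schema matching the reported query
CREATE TABLE test_table (id INT, col1 FLOAT, col2 DOUBLE);

-- after loading the same decimal values into both columns:
SELECT id, sum(col1), sum(col2)
FROM test_table
GROUP BY id;
-- sum(col1) can surface float artifacts (the thread reports
-- 0.320484484676 instead of 0.32); sum(col2) keeps double precision
```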
On Mon, Jun 11, 2012 at 12:49 PM, Guillaume Polaert wrote:
Hi,
We're experiencing an issue with the sum function in Hive 0.7.1.
The precision of float numbers isn't correct (0.320484484676 instead of 0.32).
We don't see this error with the double format.
For instance, "select id, sum(col1), sum(col2) from test_table group by id"
returns incorrect values