I have a requirement I am trying to support in Hive, and I am not sure if it is doable.
I have Hadoop 1.1.1 with Hive 0.9.0 (using Derby as the metastore).
I partition my data by a dt column, so my table 'foo' has partitions like
'dt=2013-07-01' through 'dt=2013-07-30'.
Now the user wants to query al
conf while the hive client is running.
hive -hiveconf hive.root.logger=ALL,console -e "DDL statement;"
hive -hiveconf hive.root.logger=ALL,console -f ddl.sql
Hope this helps
Thanks
On Mar 20, 2013, at 1:45 PM, java8964 java8964 wrote:
Hi,
I have Hadoop running in pseudo-distributed mode on
Hi,
Hive 0.9.0 + Elephant-Bird 3.0.7
I ran into a problem using Elephant-Bird with Hive. I know what may be causing
this problem, but I don't know which side this bug belongs to. Let me
explain what the problem is.
If we define a Google protobuf file with a field name like 'dateString' (the
This is in HIVE-0.9.0
hive> list jars;
/nfs_home/common/userlibs/google-collections-1.0.jar
/nfs_home/common/userlibs/elephant-bird-hive-3.0.7.jar
/nfs_home/common/userlibs/protobuf-java-2.3.0.jar
/nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar
file:/usr/lib/hive/lib/hive-builtins-0.9.0-cdh4.1.
Hi,
I have a Hive table which uses the jar files provided by Elephant-Bird,
which is a framework that integrates LZO and Google protobuf data with
Hadoop/Hive.
If I start the Hive client like this:
hive --auxpath path_to_jars, querying my table works fine,
but if I use the add jar aft
can access them just by their name in your code.
>
> About #2, doesn't sound normal to me. Did you figure that out or still
> running into it?
>
> Mark
>
> On Thu, Dec 20, 2012 at 5:01 PM, java8964 java8964
> wrote:
> > Hi, I have 2 questions related to the h
Actually I would like to second this question. In addition to that, I wonder if it
is possible to access the table properties from the UDF too.
I also have XML data, but with namespaces in it. The XPATH UDF that comes with
Hive doesn't support namespaces. Supporting namespaces in XML is simple, j
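For what it's worth, here is a minimal sketch (not from this thread) of namespace-aware
XPath in plain Java with javax.xml.xpath, which is roughly what a namespace-aware xpath
UDF would have to do internally. The "ns" prefix, the example URI, and the class name
are made up for illustration.

// Minimal sketch: resolving a namespace prefix before evaluating an XPath expression.
import java.io.StringReader;
import java.util.Collections;
import java.util.Iterator;

import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.xml.sax.InputSource;

public class NamespaceXPathDemo {
  public static void main(String[] args) throws Exception {
    String xml = "<root xmlns:ns=\"http://example.com/ns\"><ns:item>42</ns:item></root>";

    XPath xpath = XPathFactory.newInstance().newXPath();
    // Without a NamespaceContext, the "ns:" prefix in the expression cannot be resolved.
    xpath.setNamespaceContext(new NamespaceContext() {
      public String getNamespaceURI(String prefix) {
        return "ns".equals(prefix) ? "http://example.com/ns" : XMLConstants.NULL_NS_URI;
      }
      public String getPrefix(String namespaceURI) {
        return null;
      }
      public Iterator getPrefixes(String namespaceURI) {
        return Collections.emptyList().iterator();
      }
    });

    // Evaluates to the text of <ns:item>, i.e. "42".
    String value = xpath.evaluate("/root/ns:item", new InputSource(new StringReader(xml)));
    System.out.println(value);
  }
}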
ou are
> welcome to do so by creating a JIRA and posting a patch. UDFs are an
> easy and excellent way to contribute back to the Hive community.
>
> Thanks!
>
> Mark
>
> On Wed, Dec 19, 2012 at 8:52 AM, java8964 java8964
> wrote:
> > Hi, I have a question related to
Hi, I have a question related to the XPATH UDF currently in Hive.
From the original JIRA story about this UDF
(https://issues.apache.org/jira/browse/HIVE-1027), it looks like the UDF won't
support namespaces in the XML; is that true?
Does any later Hive version support namespaces, and if so, what i
optimize.cp=false;
> set hive.optimize.ppd=false;
>
> 2012/12/13 java8964 java8964 :
> > Hi,
> >
> > I played with my query further, and found it very puzzling to explain the
> > following behaviors:
> >
> > 1) The following query works:
> >
> > select
Hi,
I played with my query further, and found it very puzzling to explain the
following behaviors:
1) The following query works:
select c_poi.provider_str, c_poi.name
from (
  select darray(search_results, c.rank) as c_poi
  from nulf_search lateral view explode(search_clicks) clickTable as c
) a
I g
OK.
I followed the Hive source code of
org.apache.hadoop.hive.ql.udf.generic.GenericUDFArrayContains and wrote the
UDF. It is quite simple.
It works fine as expected for simple cases, but when I try to run it in
some complex queries, the Hive MR jobs fail with some strange errors. What I
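The actual UDF isn't shown in the thread, so purely as an illustration, here is a minimal
sketch of what a darray(array, index) GenericUDF modeled on GenericUDFArrayContains might
look like. The class name is made up, and it assumes the index argument is an int with no
further argument checking.

// Illustrative sketch only: return the element of an array at a given index.
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.IntObjectInspector;

public class GenericUDFDArray extends GenericUDF {
  private ListObjectInspector listOI;
  private IntObjectInspector indexOI;

  @Override
  public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length != 2) {
      throw new UDFArgumentException("darray(array, index) takes exactly two arguments");
    }
    listOI = (ListObjectInspector) args[0];
    indexOI = (IntObjectInspector) args[1];
    // The return type is simply the element type of the input array.
    return listOI.getListElementObjectInspector();
  }

  @Override
  public Object evaluate(DeferredObject[] args) throws HiveException {
    Object list = args[0].get();
    Object index = args[1].get();
    if (list == null || index == null) {
      return null;
    }
    // getListElement() already returns null for an out-of-range index.
    return listOI.getListElement(list, indexOI.get(index));
  }

  @Override
  public String getDisplayString(String[] children) {
    return "darray(" + children[0] + ", " + children[1] + ")";
  }
}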
Hi, in our project we use Hive on the CDH3U4 release (Hive 0.7.1), and I have a
Hive table like the following:
Table foo ( search_results array<...>,
search_clicks array<...> )
As you can see, the 2nd column, which represents the list of search results
clicked, contains the index location of which result
Hi,
Our company is currently using the CDH3 release, which comes with Hive 0.7.1.
Right now, I have data coming from another team, which also provides a
custom InputFormat and RecordReader, but using the new mapreduce API.
I am trying to build a Hive table on top of this data, and hope I can reuse t
This is not a Hive question but a SQL question.
You need to be clearer about your data and try to think of a way to solve your
problem. Without details about your data, there is no easy way to answer your
question.
For example, just based on the example data you provide, do 'abc' and
'cde' only happen
If you don't need to join current_web_page and previous_web_page, assuming you
can just trust the timestamp, then, as Phil points out, a custom UDF of
collect_list() is the way to go.
You need to implement the collect_list() UDF yourself; Hive doesn't have one by
default. But it should be straightfo
Hi,
I am trying to implement a UDAF for Kurtosis
(http://en.wikipedia.org/wiki/Kurtosis)
in Hive.
I already found a library to do it, from Apache Commons Math
(http://commons.apache.org/math/apidocs/org/apache/commons/math/stat
2012 at 4:17 AM, java8964 java8964 wrote:
Hi, I am using the Cloudera release cdh3u3, which has Hive 0.7.1.
I am trying to write a Hive UDF to calculate a moving sum. Right
now, I am having trouble getting the constant value that is passed in at the
initialization stage.
For exampl
Hi, I am using the Cloudera release cdh3u3, which has Hive 0.7.1.
I am trying to write a Hive UDF to calculate a moving sum. Right
now, I am having trouble getting the constant value that is passed in at the
initialization stage.
For example, let's assume the function is like the fo
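I don't have the exact code from this thread, but one way that later Hive versions expose
literal arguments at plan time is through ConstantObjectInspector inside
GenericUDF.initialize(); whether that interface is available on Hive 0.7.1 would need
checking. A hedged sketch, with made-up names and an assumed integer literal:

// Sketch: reading a constant argument (e.g. a window size) in initialize(),
// before any rows are processed. The moving-sum accumulation itself is omitted.
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ConstantObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.IntWritable;

public class MovingSumUDF extends GenericUDF {
  private int windowSize;

  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
    if (arguments.length != 2) {
      throw new UDFArgumentException("moving_sum(col, n) takes exactly two arguments");
    }
    // The second argument must be a literal so its value is known here.
    if (!(arguments[1] instanceof ConstantObjectInspector)) {
      throw new UDFArgumentException("moving_sum(col, n): n must be a constant");
    }
    Object constant = ((ConstantObjectInspector) arguments[1]).getWritableConstantValue();
    windowSize = ((IntWritable) constant).get();  // assumes the literal is an int
    return PrimitiveObjectInspectorFactory.writableDoubleObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    // The per-row moving-sum logic is left out of this sketch.
    return null;
  }

  @Override
  public String getDisplayString(String[] children) {
    return "moving_sum(" + children[0] + ", " + windowSize + ")";
  }
}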
Hi,
I have a question about the behavior of the class
org.apache.hadoop.hive.contrib.serde2.RegexSerDe. Here is the example I tested
using the Cloudera hive-0.7.1-cdh3u3 release. The above class did NOT do what I
expected; does anyone know the reason?
user:~/tmp> more Test.java
import java.io.*;
impor
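The rest of the test program is cut off above, but one common source of surprise with the
contrib RegexSerDe is, to my understanding, that it requires the regex to match the entire
line and maps each capturing group to one string column (non-matching rows come back as all
NULLs). A tiny standalone Java check of that idea, with a made-up regex and sample line:

// Standalone check of full-line matching and group-to-column mapping.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSerDeTest {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("(\\S+)\\s+(\\S+)\\s+(\\d+)");
    String line = "alice login 42";

    Matcher m = p.matcher(line);
    if (m.matches()) {  // full-line match, not a partial find()
      for (int i = 1; i <= m.groupCount(); i++) {
        System.out.println("column " + i + " = " + m.group(i));
      }
    } else {
      System.out.println("no full match -> the SerDe would emit NULL columns");
    }
  }
}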