I want to print the lineage info of a SQL query. I found this JIRA:
https://issues.apache.org/jira/browse/HIVE-1131. How do I use it?
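For reference, a minimal sketch of how the lineage collected by that JIRA can be surfaced, assuming the stock post-execution hook classes ship with your Hive build (the class names below are assumptions to verify against your distribution, and the table names are placeholders):

-- print column lineage after each query via the bundled post-exec hook
SET hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.PostExecutePrinter;
-- on Hive 1.1+ a JSON lineage logger may also be available:
-- SET hive.exec.post.hooks=org.apache.hadoop.hive.ql.hooks.LineageLogger;
-- run any query that writes data; lineage lines are emitted when it finishes
INSERT OVERWRITE TABLE lineage_demo_tgt SELECT key, value FROM lineage_demo_src;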
r7raul1...@163.com
Yes, it is happening for Hue only. Can you please suggest how I can clean up Hue
sessions from the server?
The query succeeds on the Hive command line.
On Fri, May 15, 2015 at 11:52 AM, Nitin Pawar
wrote:
> Is this happening for Hue?
>
> If yes, maybe you can try cleaning up Hue sessions from the server. (thi
Is this happening for Hue?
If yes, maybe you can try cleaning up Hue sessions from the server. (This may
clear all users' active sessions from Hue, so be careful while doing it.)
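If it helps, one blunt way to do that cleanup, assuming Hue is using the default Django session store in its backing database (the table name below is the Django default and may differ in your install, so treat this purely as a sketch):

-- run against Hue's backing database (MySQL/PostgreSQL/SQLite), not against Hive
-- assumption: default Django session backend, which stores sessions in django_session
DELETE FROM django_session;  -- this logs out every Hue user, per the warning above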
On Fri, May 15, 2015 at 11:31 AM, amit kumar wrote:
> I am using CDH 5.2.1.
>
> Any pointers will be of immense help.
>
I am using CDH 5.2.1.
Any pointers will be of immense help.
Thanks
On Fri, May 15, 2015 at 9:43 AM, amit kumar wrote:
> Hi,
>
> After re-creating my account in Hue, I receive “User matching query does
> not exist” when attempting to run a Hive query.
>
> The query succeeds on the Hive command line.
Hi,
After re-creating my account in Hue, I receive “User matching query does not
exist” when attempting to run a Hive query.
The query succeeds on the Hive command line.
Please advise on this.
Thank you,
Amit
Mungeol,
I did check the # of mappers and that did not change between the two
queries, but when I ran a count(*) query the total execution time was reduced
significantly for Query 1 vs. Query 2. Also, the amount of data the query reads
does change when the where clause changes. I still can't explain why one
Hi, Appan.
You can simply check the amount of data your query reads from the
table, or the number of mappers used to run that query.
Then you can tell whether it is filtering or scanning the whole table.
Of course, it is a lazy approach, but you can give it a try.
I think query 1 should work fine. b
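Another low-effort check, if your Hive version supports it: EXPLAIN DEPENDENCY lists the partitions a query will read before you run it. The query below is only illustrative, since Query 1 appears truncated in this thread:

-- sketch: inspect which partitions the query would touch
EXPLAIN DEPENDENCY
SELECT * FROM Sales
WHERE year = 2015 AND month = 5 AND day BETWEEN 1 AND 15;
-- the JSON output lists input_tables and input_partitions; if pruning works,
-- only the year=2015/month=5 partitions should appear, not the whole table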
I agree with you, Viral. I see the same behavior as well. We are on Hive
0.13 for the cluster where I'm testing this.
On Thu, May 14, 2015 at 2:16 PM, Viral Bajaria
wrote:
> Hi Appan,
>
> In my experience I have seen that Query 2 does not use partition pruning
> because it's not a straight up fil
Hi Appan,
In my experience, I have seen that Query 2 does not use partition pruning
because it's not straight-up filtering and involves using functions (aka
UDFs).
What version of Hive are you using?
Thanks,
Viral
On Thu, May 14, 2015 at 1:48 PM, Appan Thirumaligai
wrote:
> Hi,
>
> I have
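To illustrate Viral's point with a concrete shape (Query 2 itself never appears in the thread, so the first statement below is purely hypothetical): wrapping the partition columns in functions or expressions is what tends to defeat pruning, while literal comparisons like Query 1 keep it.

-- hypothetical Query 2 shape: partition columns folded into an expression/UDF,
-- which (per Viral's experience, and depending on Hive version) is not pruned
SELECT * FROM Sales
WHERE year * 10000 + month * 100 + day BETWEEN 20150501 AND 20150515;

-- pruning-friendly rewrite: compare the partition columns to literals directly
SELECT * FROM Sales
WHERE year = 2015 AND month = 5 AND day BETWEEN 1 AND 15;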
Still no effect. I set minsize to 32M and maxsize to 64M.
On Thu, May 14, 2015 at 11:07 AM, Ankit Bhatnagar
wrote:
> try these
> mapred.max.split.size=
> mapred.min.split.size=
>
> mapreduce.input.fileinputformat.split.maxsize=
> mapreduce.input.fileinputformat.split.minsize=
>
>
>
>
>
> On Thurs
Hi,
I have a question on the Hive optimizer. I have a table with partition columns,
e.g., Sales partitioned by year, month, day. Assume that I have two years'
worth of data in this table. I'm running two queries on this table.
Query 1: Select * from Sales where year=2015 and month = 5 and day between
1
Yeah, Hive 0.13 isn't compatible with HBase 1.0. We haven't made the jump to
HBase 1.0 yet. But Hive 1.1 is on HBase 0.98. And from what I know, there
aren't many breaking changes from 0.98 to 1.0, so you might give that a shot
and see if it works.
On Thu, May 14, 2015 at 3:30 PM, Ibrar Ahmed wrote:
>
I have also tried
ADD FILE /usr/local/hbase/conf/hbase-site.xml;
ADD JAR /usr/local/hive/lib/zookeeper-3.4.5.jar;
ADD JAR /usr/local/hive/lib/hive-hbase-handler-0.13.0.jar;
ADD JAR /usr/local/hive/lib/guava-11.0.2.jar;
ADD JAR /usr/local/hbase/lib/hbase-client-1.0.1.jar;
ADD JAR /usr/local/hbase/l
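For what it's worth, that list mixes the Hive 0.13 HBase handler with an HBase 1.0.1 client jar, which is exactly the mismatch called out elsewhere in this thread. A sketch of the suggestion to try an HBase 0.98 client instead (all paths and jar versions below are illustrative placeholders, not verified file names):

ADD FILE /usr/local/hbase-0.98/conf/hbase-site.xml;
ADD JAR /usr/local/hive/lib/hive-hbase-handler-0.13.0.jar;
ADD JAR /usr/local/hbase-0.98/lib/hbase-client-0.98.12-hadoop2.jar;
ADD JAR /usr/local/hbase-0.98/lib/hbase-common-0.98.12-hadoop2.jar;
ADD JAR /usr/local/hbase-0.98/lib/hbase-protocol-0.98.12-hadoop2.jar;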
Hive: 0.13
HBase: 1.0.1
On Fri, May 15, 2015 at 1:26 AM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Hi Ibrar,
>
> It seems like your hive and hbase versions are incompatible. What version
> of hive and hbase are you on?
>
> On Thu, May 14, 2015 at 3:21 PM, Ibrar Ahmed
> w
Hi Ibrar,
It seems like your Hive and HBase versions are incompatible. What versions
of Hive and HBase are you on?
On Thu, May 14, 2015 at 3:21 PM, Ibrar Ahmed wrote:
> Hi,
>
> While creating a table in Hive I am getting this error message.
>
> CREATE TABLE abcd(key int, value string) STORED BY
Hi,
While creating a table in Hive I am getting this error message.
CREATE TABLE abcd(key int, value string) STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name"
= "xyz");
[Hive Error]: Q
try these
mapred.max.split.size=
mapred.min.split.size=
mapreduce.input.fileinputformat.split.maxsize=
mapreduce.input.fileinputformat.split.minsize=
On Thursday, May 14, 2015 11:04 AM, Pradeep Gollakota
wrote:
The following property has had no effect.
mapreduce.input.filei
Hi experts,
My application uses a CTAS query to create a result table in Hive; the
source table has a deeply nested struct column (7 levels). The CTAS query fails
with the following exception.
jdbc:hive2://localhost:1/default> CREATE TABLE IF NOT EXISTS
reporting.test1 AS select row_number() over(
The following property has had no effect:
mapreduce.input.fileinputformat.split.maxsize = 67108864
I'm still getting 1 Mapper per file.
On Thu, May 14, 2015 at 10:27 AM, Ankit Bhatnagar
wrote:
> you can explicitly set the split size
>
>
>
> On Wednesday, May 13, 2015 11:37 PM, Pradeep Go
you can explicitly set the split size
On Wednesday, May 13, 2015 11:37 PM, Pradeep Gollakota
wrote:
Hi All,
I'm writing an MR job to read data using HCatInputFormat... however, the job is
generating too many splits. I don't have this problem when running queries in
Hive since it c
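For context on why Hive doesn't hit this: Hive typically runs with CombineHiveInputFormat, which merges small files into larger splits up to the configured max split size, whereas plain FileInputFormat-based jobs (including, as far as I know, HCatInputFormat) produce at least one split per file, so the min/max split sizes alone can't push the mapper count below the file count. A sketch of the Hive-session equivalent, with example values (the exact sizes are placeholders):

-- make Hive combine small files into fewer splits (usually already the default)
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapreduce.input.fileinputformat.split.maxsize=268435456;  -- ~256 MB per combined split (example)
SET mapreduce.input.fileinputformat.split.minsize=134217728;  -- ~128 MB lower bound (example)
-- an MR job using HCatInputFormat would need the corresponding values in its
-- job Configuration, plus a combining input format to actually merge small files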
Ok, I think I understand now. I also get why OrcSplit.getPath returns
just up to the partition keys and not the delta directories. In most
cases there will be more than one delta directory, so which one would it
pick?
It seems you already know the file type you are working on before you
cal
Hi Hive Users,
I'm using the Cloudera distribution with Hive 0.13 on my cluster.
I came across a problem where the job is not making any progress after writing
the log line: "*Number of reduce tasks is set to 0 since there's no reduce
operator*"
Below is the log for the same; could you help me wha
Now my HBase is working fine, but I am still getting the same error:
[127.0.0.1:1] hive> CREATE TABLE hbase_table_1(key int, value string)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ("hbase.columns.mapp