Hi Jone,
I can say for sure something is wrong. :)
I would _start_ by going to the JobTracker web UI. That is your friend. Find
your job and look for failed reducers. That's the starting point anyway,
IMHO.
On Fri, Feb 21, 2014 at 11:35 AM, Jone Lura wrote:
> Hi,
>
> I have tried some variations of queries with aggregation functions, such as
Most interesting. We had an issue recently with querying a table with 15K
columns and running out of heap, but not 15K partitions.
15K partitions shouldn't be causing a problem, in my humble estimation.
Maybe a million, but not 15K. :)
So is there a traceback we can look at? Or is it not a heap issue?
Hi folks,
We are running CDH 4.3.0 Hive (0.10.0+121) with a MySQL metastore.
In Hive, we have an external table backed by HDFS which has a 3-level
partitioning scheme that currently has 15000+ partitions.
Within the last day or so, queries against this table have started failing.
A simple query
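Before digging into any particular query, a hedged first sanity check is to see whether the metastore can still enumerate the partitions at all (the table name below is a placeholder, not from the original thread):

```sql
-- If the metastore is the bottleneck with 15000+ partitions, even
-- listing them may be slow or fail, which narrows down the problem.
SHOW PARTITIONS my_external_table;
```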
Hi,
I have tried some variations of queries with aggregation functions, such as the
following:
select max(total) from my_table;
and
select id, sum(total) from my_table group by id
In my JUnit tests, I only have two rows of data, but the queries are
extremely slow.
The job detail outp
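With only two rows, the dominant cost is usually MapReduce job startup rather than the query itself. One hedged thing to try (assuming Hive 0.7 or later; whether it helps depends on the actual bottleneck) is Hive's automatic local mode, which runs small jobs in-process:

```sql
-- Let Hive run small jobs locally instead of submitting a full
-- MapReduce job for two rows of test data.
SET hive.exec.mode.local.auto=true;
select max(total) from my_table;
```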
Sure thing. Thanks Biswajit.
Thanks,
Shouvanik
From: Biswajit Nayak [mailto:biswajit.na...@inmobi.com]
Sent: Friday, February 21, 2014 9:23 AM
To: user@hive.apache.org
Subject: RE: Is there any monitoring tool available for hiveserver2
This one is for monitoring the metastore only. It would have to be changed
for HiveServer2 monitoring.
Biswa
On 21 Feb 2014 22:44, wrote:
> Thanks Biswajit.
>
> I will try it and let you know.
>
> Thanks,
> Shouvanik
>
> *From:* Biswajit Nayak [mailto:biswajit.na...@inmobi.com]
Thanks Biswajit.
I will try it and let you know.
Thanks,
Shouvanik
From: Biswajit Nayak [mailto:biswajit.na...@inmobi.com]
Sent: Friday, February 21, 2014 2:28 AM
To: user@hive.apache.org
Subject: Re: Is there any monitoring tool available for hiveserver2
Below is the script that does my graphing of heap (usage|allocated) for hive.
Line 316 in my UDTF, where it shows the error, is the line where I call forward().
The whole trace is :
Caused by: java.lang.RuntimeException: cannot find field key from [0:_col0,
1:_col2, 2:_col6, 3:_col7, 4:_col8, 5:_col9]
at
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.ge
What is your stacktrace? Can you paste it here?
It may be a different bug.
If you put e.f3 <> null in an outer query, does that work?
Or maybe you have to enhance your UDTF to push that filter into the UDTF
itself. It is not perfect, but maybe a solution for you for now.
You can create a new Jira if i
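The "outer query" workaround suggested above could be sketched like this (untested; note also that in HiveQL a comparison like `e.f3 <> null` never matches anything, so `IS NOT NULL` is the usual null test):

```sql
-- Apply the lateral view first, then filter in an enclosing query so
-- the predicate is not pushed into the UDTF's output inspector.
select t.f1, t.f2, t.f3, t.f4
from (
  select e.f1, e.f2, e.f3, e.f4
  from mytable LATERAL VIEW myfunc(p1, p2, p3, p4) e as f1, f2, f3, f4
  where lang = 123
) t
where t.f3 IS NOT NULL;
```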
Hi,
I have a UDTF which works fine except when I do a query like the
following :
select e.* from mytable LATERAL VIEW myfunc(p1,p2,p3,p4) e as f1,f2,f3,f4
where lang=123 and e.f3 <> null;
The error I see is:
java.lang.RuntimeException: cannot find field key from [0:_col0, 1:_col2,
2:_c
Hello,
For Hive, add this to hive-site.xml on every Hive node:

<property>
  <name>hive.querylog.location</name>
  <value>/local_path</value>
  <description>Hive query logs, for log redirection</description>
</property>

For Hadoop (MapReduce jobs), add this to hadoop-env.sh on every Hadoop node:

export HADOOP_LOG_DIR=/local_path

Hope this helps you ;)
Matouk
Hi,
I have encountered a problem when I try to deploy my application on Jetty.
In my JUnit test environment everything works fine, but when I deploy the same
application to Jetty I receive the following message:
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at
org
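A NoClassDefFoundError like this usually means the Hadoop client jars are on the JUnit classpath but were not packaged into the webapp. A hedged Maven sketch (the exact artifact and version depend on your Hadoop distribution, so treat these as placeholders):

```xml
<!-- hadoop-common provides org.apache.hadoop.conf.Configuration on
     Hadoop 2.x; on Hadoop 1.x the artifact would be hadoop-core.
     The version property below is a placeholder. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
</dependency>
```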
Hi All,
I've just started my adventure with Hive, so I'm not sure if it's an
issue here or just my misunderstanding...
I'm using Hortonworks Sandbox 2.0 (Hive 0.12.0.2.0.6.0-76)
I'm following the Hortonworks spring-xd tutorial and the last step is to
create a table as a select of two views (all oth
Hi Hive Users ,
I have used the 2 variables below to override the location of the log files.
The default location is /tmp:
* hiveconf hive.log.dir
* hiveconf hive.querylog.location=
It works fine, but when I have a map-join in the SQL, the logs still go to /tmp
instead of the value I provided. Ple
Below is the script that does my graphing of heap (usage|allocated) for
hive.
#!/bin/bash
HIVES_PID=`jps -mlv |grep "org.apache.hadoop.hive.metastore.HiveMetaStore"
|awk '{print $1}'`
if [ -n "$HIVES_PID" ]; then
jmap -heap "$HIVES_PID" |awk '
BEGIN{ gmetric="/usr/bin/gmetric";sum=0}
{
split($1,a
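Since the awk body above is cut off in the archive, here is a hedged, self-contained sketch of the parsing idea: sum the `used` byte counts from `jmap -heap` style lines. The sample input is inlined so the snippet runs on its own; the real script would pipe `jmap -heap $HIVES_PID` in and push the totals to Ganglia via gmetric instead of printing them.

```shell
# Strip parentheses/commas so $3 is the raw byte count, then sum
# every "used" line and print the total.
awk '
  /used/ { gsub(/[(),]/, ""); used += $3 }
  END    { printf "used_bytes=%d\n", used }
' <<'EOF'
   capacity = 85983232 (82.0MB)
   used     = 52428800 (50.0MB)
EOF
```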
great
On 21 Feb 2014 15:36, "Jone Lura" wrote:
> Thank you!
>
> By adding the ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' to the
> create statement it finally worked. I also changed the delimiter in the
> a.txt file to match the statement.
>
> In addition I had to delete the ship_type created using the Hive CLI and
Thank you!
By adding the ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' to the create
statement it finally worked. I also changed the delimiter in the a.txt file to
match the statement.
In addition I had to delete the ship_type created using the Hive CLI and create
the table through the JDBC driver
Can you just cat the file a.txt as well?
You may have to create the table as
create table ship_type(id int, name string) ROW FORMAT DELIMITED FIELDS
TERMINATED BY '\t';
if it is tab separated, or use whatever field separator you actually have.
You get incorrect results when your table definition does not match
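To make the mismatch concrete, a hedged end-to-end sketch (the file path is illustrative): without the ROW FORMAT clause, Hive assumes Ctrl-A (\001) as the field separator, so a tab-separated file loads with NULL columns.

```sql
-- Declare the delimiter to match the data file, then load and verify.
create table ship_type (id int, name string)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
LOAD DATA LOCAL INPATH '/tmp/a.txt' INTO TABLE ship_type;
select * from ship_type;
```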
I used this from the example:
stmt.execute("create table " + tableName + " (key int, value string)");
In my application it is very similar:
stmt.execute("create table ship_type (id int, name string)");
On 21 Feb 2014, at 10:27, Nitin Pawar wrote:
> can you share your create table statement ?
can you share your create table statement ?
On Fri, Feb 21, 2014 at 2:55 PM, Jone Lura wrote:
> Hi,
>
> I am new with Hadoop and Hive, and I am trying to figure out what is
> going wrong.
>
> In my application I connect successfully to Hive and I am able to
> load data into it.
>
> When
Hi,
I am new with Hadoop and Hive, and I am trying to figure out what is
going wrong.
In my application I connect successfully to Hive and I am able to
load data into it.
When I try to run a select statement, however, things are not as I
expected.
The select query returns the correct n