Your query says "JOIN supplier s ON (s.supplierid=v.supplier)" but
s.supplierid should be s.supplier_id.
Also, the vender schema shows a "quantiry" column, which might be just a
typo in your message, but if you cut-and-pasted the schema into the message,
then you should change the column name to "quantity".
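For what it's worth, a minimal sketch of the corrected join (every column
name other than supplier_id, supplier and quantity is an assumption, since
the full schema wasn't quoted):

select v.vendor_name, s.supplier_name, d.year, d.quarter, sum(v.quantity)
from vender v
join supplier s on (s.supplier_id = v.supplier)   -- fixed column name
join date d on (d.dateid = v.dateid)              -- join key assumed
group by v.vendor_name, s.supplier_name, d.year, d.quarter;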
Hi Bejoy,
I made some changes as per your suggestion.
Here is the error from the job
http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201207251858_0004 :
Error: java.lang.ClassNotFoundException:
org.apache.zookeeper.KeeperException
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.s
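If it helps, one way to get the ZooKeeper classes onto the task classpath is
to add the jar from the Hive CLI before running the query (the exact path is
an assumption, lifted from the hive.aux.jars.path value quoted later in this
thread):

-- makes org.apache.zookeeper.KeeperException resolvable in the MR tasks
add jar /usr/lib/hive/lib/zookeeper-3.3.1.jar;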
The ctor is used in TestHBaseSerDe.java.
So maybe change it to package-private?
On Wed, Jul 25, 2012 at 12:43 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> While going through some code for HBase/Hive Integration, I came across
> this constructor:
>
> public HBaseSerDe()
While going through some code for HBase/Hive Integration, I came across
this constructor:
public HBaseSerDe() throws SerDeException {
  // empty body: nothing is initialized, and nothing is actually thrown
}
Basically, the constructor does nothing, yet it declares a checked exception
that every caller must handle.
The problem is that fixing this now would be a non-passive (i.e.
backwards-incompatible) change.
I couldn't really find an obvio
Gee thanks! That is great service.
Chuck
From: Bejoy Ks [mailto:bejoy...@yahoo.com]
Sent: Wednesday, July 25, 2012 12:04 PM
To: user@hive.apache.org
Subject: Re: Problem replacing existing Hive file with modified copy
The corresponding jira filed to track this bug is 'HIVE-3300'.
https://issues.apache.org/jira/browse/HIVE-3300
And now I have to apologize: I was one version late for hive (0.8.1), and
version 0.9 does include HWI with bootstrap. The jira must be misleading, or
I don't understand what the issue is about...
Bertrand
https://issues.apache.org/jira/browse/HIVE-2910
On Wed, Jul 25, 2012 at 6:15 PM, Bertra
Hi
It is because of space issues. Issue the 'df -h' command on the TT node that
reported this error; the partition used for dfs.data.dir is probably full.
Regards
Bejoy KS
From: abhiTowson cal
To: user@hive.apache.org
Sent: Wednesday, July 25, 2012 9:48 PM
Subjec
Great answer. Thanks a lot.
1) I understand the concern with branches, but I quickly reviewed the changes
for 0.9.1 and not everything seemed to be a bug patch.
So I thought: why not ask about HIVE-2910.
2) I wasn't sure about that, but it seems logical. That's great news.
I will definitely t
The corresponding jira filed to track this bug is 'HIVE-3300'.
https://issues.apache.org/jira/browse/HIVE-3300
Regards
Bejoy KS
From: Bejoy Ks
To: "user@hive.apache.org"
Sent: Wednesday, July 25, 2012 9:28 PM
Subject: Re: Problem replacing existing Hive fi
Hi Connell
It looks like a bug in hive; I checked with hive 0.9. If you are loading data
from the local fs to hive tables using 'LOAD DATA LOCAL INPATH' and a file with
the same name exists in the table's location, then the new file will be
suffixed with *_copy_1.
But if we do the 'LOAD DATA IN
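A minimal sketch of the behaviour described above (the table name and the
local path are assumptions):

-- first load puts names2.txt into the table directory
load data local inpath '/tmp/names2.txt' into table names;
-- loading a file with the same name again does not replace it; in hive 0.9
-- the new file lands alongside the old one with a *_copy_1 suffix
load data local inpath '/tmp/names2.txt' into table names;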
Generally we only apply patches to trunk; maintaining branches
becomes too much trouble for us. You have to remember that most hive
major versions have no actual major changes. Most everything is hidden
behind a query language. The only changes that have to be done
carefully are changes to the
Hi,
Here is my stand. Hive provides a DSL to easily explore data contained in
hadoop with limited experience with Java and MapReduce.
And the Hive Web Interface provides an easy exposure: users need only a
browser, and the hadoop cluster can be well 'fire-walled' because the
communication is only th
I created a Hive table that consists of two files, names1.txt and names2.txt.
The table works correctly and answers all queries etc.
I want to REPLACE names2.txt with a modified version. I copied the new version
of names2.txt to the /tmp/input folder within HDFS. Then I tried the command:
hive
Hi Users,
I have 3 tables: vender, supplier and date. Using these tables I'm trying
to generate a report like the one below:
*Vendor Name, Supplier Name, Year, Quarter, Sum ( quantity )*
I have executed the query below, but after executing it I'm not getting
any result on my console:
hive>select v.ve
I recall recently reading somewhere on Cloudera's web site that it was
not recommended to run more than one thrift server connecting to hive;
however, it's been a couple of months since I read this. I'm still digging
to find the article and was curious whether someone here can provide
some insight
Hi Prabhu,
Be careful when going in the direction of calendar dimensions. While strictly
speaking this is the cleaner DWH design, you will for sure run into issues
you might not expect. Consider that this is probably what you would want to
do (roughly) to query a day:
select count(*)
from fact f
j
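Presumably the truncated query continues roughly like this (a hedged guess;
the join key and the date filter column are assumptions, borrowing the
dim_date names that appear elsewhere in this digest):

select count(*)
from fact f
join dim_date d on (f.dateid = d.dateid)   -- join to the calendar dimension
where d.ddate = '2020-12-22 00:00:00.000'; -- restrict to a single day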
Hi Anson
If you have your external table pointing to a directory whose files are
compressed with lzo, everything will work as desired, provided you have the
lzo codec listed in io.compression.codecs in core-site.xml.
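A minimal sketch of such a table (the path and the column list are
assumptions); with the codec registered, hive decompresses the .lzo files
transparently on read:

create external table lzo_logs (line string)
location '/data/logs_lzo';  -- directory containing the .lzo files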
Regards
Bejoy KS
From: Anson Abraham
To: user@hive.ap
With the release of CDH4, is LZO compression still supported, e.g. if I
have my hive table point to a path of files in lzo?
-anson
Hi Vijay
You have provided the hbase master directly. (That is fine for a single-node
hbase installation.) But can you still try providing the zookeeper quorum
instead?
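For example, the quorum can be supplied per-session from the hive CLI (the
host names are placeholders):

set hbase.zookeeper.quorum=zkhost1,zkhost2,zkhost3;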
If that doesn't work as well, please post the error log from the mapreduce
tasks.
Just go to the jobtracker page and drill down o
Can you also post the logs from "/tmp//hive.log"? That might contain some
info on your job failure.
On Wed, Jul 25, 2012 at 8:28 AM, vijay shinde wrote:
> Hi Bejoy,
>
> Thanks for the quick reply. Here are some additional details
>
> Cloudera Version - CDH3U4
>
> *hive-site.xml*
> hive.aux.jars.
Hi Bejoy,
Thanks for the quick reply. Here are some additional details
Cloudera Version - CDH3U4
*hive-site.xml*
hive.aux.jars.path
file:///usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u2.jar,file:///usr/lib/hive/lib/hbase-0.90.4-cdh3u2.jar,file:///usr/lib/hive/lib/zookeeper-3.3.1.jar,file:///
Thanks for your help :)
The data has been loaded fine now.
select * from dim_date;
7662  2020-12-22 00:00:00.000  2020  4  12  3  52
13  4  357  83  22  3  December  Dec
Tuesday  Tue
7663  2020-12-23 00:00:00.000  2020  4  12  3  5
Hi Prabhu
Your data is tab delimited; use /t as the delimiter while creating the table:
fields terminated by '/t'
Not sure whether that is the right slash or not. If this doesn't work, try
the other one ('\t').
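For reference, the backslash form is the one hive expects; a minimal DDL
sketch (the table and column names are placeholders):

create table my_table (col1 int, col2 string)
row format delimited
fields terminated by '\t';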
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: prabhu k
Date:
Thanks for the reply.
I have tried both 'delimited fields terminated by '|'' and 'delimited
fields terminated by ',''; when selecting from the table I'm getting NULL
both times.
When I cat the HDFS file, it looks like below:
bin/hadoop fs -cat /user/hive/warehouse/time.txt
7666 2020-12-26 00:00:00.000 202
Bertrand,
Sorry, I don't have a link to the msck documentation. I haven't tried it
myself; I just heard of it.
Thanks
From: Bertrand Dechoux [mailto:decho...@gmail.com]
Sent: Wednesday, July 25, 2012 1:23 PM
To: user@hive.apache.org
Subject: Re: Continuous log analysis requires 'dynamic' p
What Bejoy is saying, implicitly, is that the format is not verified by the
load command. If it does not match, you will get NULL.
And it would be curious for your comma-separated value (csv) file to be using
pipe (|), but why not.
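In other words (the local path is an assumption):

-- the load succeeds whether or not the delimiter matches the data
load data local inpath '/tmp/time.csv' into table dim_date;
-- a mismatch only shows up afterwards, as rows full of NULLs
select * from dim_date limit 2;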
Bertrand
On Wed, Jul 25, 2012 at 12:45 PM, Bejoy KS wrote:
> H
Hi Prabhu
Can you cat the file in hdfs and ensure that the fields are delimited by the
'|' character?
hadoop fs -text user/hive/warehouse/dim_date/time.csv
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: prabhu k
Date: Wed, 25 Jul 2012 16:05:42
To:
R
Hi Users,
I have created the dim_date table like below. The table was created
successfully, and I then loaded the data into the dim_date table.
While selecting from the table, I am getting null values. My input file is
the time.csv file.
hive> create table dim_date(DateId int,ddate string,Year int,Quarter
int,Month_Numbe
Hi Vijay
Can you share more details, like:
- The CDH version/Hive version you are using
- The steps you followed for hive hbase integration, with the values you set
- The DDL used for hive hbase integration
- The actual error from the failed map reduce task
Regards
Bejoy KS
Sent from handheld, please excus
usage of msck:
msck table <table>
msck repair table <table>
BUT that won't help me.
I am using an external table with 'external' partitions (which do not
follow hive conventions).
So I first create an external table without a location, and then I specify
every partition with an absolute location.
I don't think ther
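Roughly, the per-partition step described above looks like this (the table
name, partition column and path are placeholders):

alter table my_logs add partition (dt='2012-07-25')
location 'hdfs:///data/external/logs/2012-07-25';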
@Puneet Khatod: I found that out, and that's why I am asking here. I guess
non-AWS users might have the same problems and a way to solve them.
@Ruslan Al-fakikh: It seems great. Is there any documentation for msck? I
will find out with the diff file, but is there a wiki page or a blog post
about it
I am facing an issue while executing Hive queries with HBase-Hive
integration. I followed the hbase-hive integration wiki:
https://cwiki.apache.org/Hive/hbaseintegration.html
I have already passed all the required jars for auxpath in the hive-site.xml
file.
I am using the Cloudera CDH demo VM. Any help would b