Sent from Rocket Mail via Android
I think these directories belong to the TaskTracker's temporary storage. I am
not confident enough about that to say go ahead with your cleanup, so wait for
a similar or an expert's response.
Sent from HTC via Rocket! excuse typo.
Austin,
You have asked some great questions in your email.
Data warehouses and the Hadoop ecosystem go hand in hand. I don't think you need
to move all data from your warehouse to Hive and HBase. This is the key :) you
need to understand where you should use Hive and where you can
LOAD command to load these files, for example:
LOAD DATA LOCAL INPATH 'path-to-csv-file.gz' INTO TABLE my_table_zip;
Hope this helps.
Keshav C Savant
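The LOAD example above can be fleshed out into a minimal sketch; the target
table's columns are an assumption, not from the thread. Hive's TEXTFILE reader
decompresses .gz files transparently, so the gzipped CSV loads as-is:

```sql
-- Hypothetical target table; the column list is an assumption.
CREATE TABLE my_table_zip (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- Hive reads the .gz file directly; no manual unzip step is needed.
LOAD DATA LOCAL INPATH 'path-to-csv-file.gz' INTO TABLE my_table_zip;
```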
From: Manish Bhoge [mailto:manishbh...@rocketmail.com]
Sent: Wednesday, September 26, 2012 9:43 PM
To: user@hive.apache.org
Hi Sadu,
See my answer below.
Also this will help you to understand in detail about collection, MAP and Array.
http://datumengineering.wordpress.com/2012/09/27/agility-in-hive-map-array-score-for-hive/
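The post above covers collection, MAP, and ARRAY columns; a minimal DDL sketch,
with hypothetical table and column names, for a column that is a collection
(ARRAY) of MAPs:

```sql
-- Hypothetical schema: 'events' is an array of maps (a collection of MAPs).
CREATE TABLE user_events (
  user_id STRING,
  events  ARRAY<MAP<STRING, STRING>>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY '|'
  MAP KEYS TERMINATED BY ':';

-- Index into the array, then look up a key in that map.
SELECT user_id, events[0]['page'] FROM user_events;
```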
From: Sadananda Hegde [mailto:saduhe...@gmail.com]
Sent: Friday, September 28, 2012 10:31 AM
From: Manish Bhoge [mailto:manishbh...@rocketmail.com]
Sent: Wednesday, September 26, 2012 9:43 PM
To: user@hive.apache.org
Subject: Re: zip file or tar file consumption
Hi,
We have been using data from Hive in Tableau. You need a JDBC connection.
I don't remember the exact menu in Tableau, but having your metadata in
MySQL and a JDBC connection on top of MySQL will allow you to access the
data from Hive. If you have CDH3 then make sure you don't
'/home/manish/zipfile';
OR
If you already have an external table pointing to a certain location, you can
load this zip file into your table as:
LOAD DATA INPATH '/home/manish/zipfile' INTO TABLE manish_test;
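Both options might be sketched together as follows; only the path and table
name come from the thread, and the single STRING column in the CREATE
statement is an assumption:

```sql
-- Option 1 (assumed shape): an external table pointing at the file's location.
CREATE EXTERNAL TABLE manish_test (line STRING)
LOCATION '/home/manish/zipfile';

-- Option 2, as given above: load the file into an existing table.
LOAD DATA INPATH '/home/manish/zipfile' INTO TABLE manish_test;
```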
Hope this helps.
Richin
From: ext Manish Bhoge [mailto:manishbh...@r
to understand whether it would be possible to utilize zip/tar files
directly in Hive. All the files have a similar schema (structure). Say 50 *.txt
files are zipped into a single zip file: can we load data directly from this zip
file, or do we need to unzip them first?
Thanks & Regards
Mani
As I mentioned, you can copy the JAR to your Hadoop cluster at /usr/lib/hive/lib
and then use it directly in HiveQL.
Thank You,
Manish.
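Once the JAR is on the cluster, using it from HiveQL might look like the
sketch below; the jar, class, and function names are all hypothetical:

```sql
-- Register the jar and bind a function name to the (hypothetical) UDF class.
ADD JAR /usr/lib/hive/lib/my-udfs.jar;
CREATE TEMPORARY FUNCTION my_clean AS 'com.example.hive.udf.MyClean';

-- Use it like any built-in function.
SELECT my_clean(col1) FROM my_table;
```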
Sent from my BlackBerry, pls excuse typo
-Original Message-
From: Manu A
Date: Wed, 26 Sep 2012 15:01:14
To:
Reply-To: user@hive.apache.org
Subject: Re: Custom
& Regards
Manish Bhoge | Technical Architect, Target DW/BI | +919379850010 (M) Ext:
5691 VOIP: 22165 | "Excellence is not a skill, It is an attitude."
MySite: http://mysites.target.com/personal/z063783
Sorry for the late reply.
For anything you want to run as MAP and REDUCE, you have to extend the
MapReduce classes for your functionality, irrespective of language (Java, Python, or
any other). Once you have extended the class, move the JAR to the Hadoop cluster.
Bertrand has also mentioned reflecti
Manu,
If you have written a UDF in Java for Hive, then you need to copy your JAR to your
Hadoop cluster in the /usr/lib/hive/lib/ folder for Hive to use it.
Thank You,
Manish
From: Manu A [mailto:hadoophi...@gmail.com]
Sent: Tuesday, September 25, 2012 3:44 PM
To: user@hive.apache.org
Subject: Cu
Sarath,
Is this the external table where you ran the query? How did you load the
table? It looks like the error is about the file related to the table rather than
the CDH JAR.
Thank You,
Manish
From: Sarath [mailto:sarathchandra.jos...@algofusiontech.com]
Sent: Tuesday, September 25, 2012 3:4
No problem. I had something different in mind: I wanted to split this
complete string into different columns to simplify the queries, like
ASP.NET_SessionId, Rviewd, UserId, UserType, LastLogin.
Now let me try your approach. I have seen this DDL in the Hive tutorial but
wasn't sure whether
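One way to split such a key=value string into named columns is Hive's built-in
str_to_map; a sketch, where the source table and column names are hypothetical
and the target keys come from the message above:

```sql
-- str_to_map(text, pair_delimiter, key_value_delimiter) returns MAP<STRING,STRING>.
SELECT
  m['ASP.NET_SessionId'] AS session_id,
  m['Rviewd']            AS rviewd,
  m['UserId']            AS user_id,
  m['UserType']          AS user_type,
  m['LastLogin']         AS last_login
FROM (
  SELECT str_to_map(cookie_string, ';', '=') AS m
  FROM raw_logs
) t;
```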
Thanks Bejoy. So you mean to say that in the below scenario we have to have both
a collection and a map together? Do I need to define ARRAY and MAP together for the
same column? As I understand from your mail, this column has not only a MAP but a
collection of MAPs. Is this assumption right?
Thank You,
Ma
Robin,
I think this is a type-casting issue with the date column. Can you send the table
definition and sample data? Then it would be easy to figure out where it is
hitting the datatype issue.
My suggestion is to create a staging table first where you dump the data,
treating all the columns as STRING, and
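The staging-table suggestion might be sketched like this; the table and column
names are assumptions:

```sql
-- Stage 1: land everything as STRING so the raw load never fails on bad values.
CREATE TABLE staging_sales (sale_ts STRING, amount STRING);

-- Stage 2: the typed table; malformed values become NULL instead of load errors.
CREATE TABLE sales (sale_ts TIMESTAMP, amount DOUBLE);

INSERT OVERWRITE TABLE sales
SELECT CAST(sale_ts AS TIMESTAMP), CAST(amount AS DOUBLE)
FROM staging_sales;
```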
When I am creating a table from a comma-delimited CSV file, I am getting the below
error:
Failed to open file '/user/manish/minfo.csv': Could not read block
Block(genStamp=1094, blockId=6807603852292275080L, numBytes=44429,
token='AA', startOffset=0, path=u'/user/manish/minfo.csv',
nodes=[Da
Khoa,
When your HiveQL involves an aggregation, join, or ORDER BY, it launches reducers;
a plain select with only a filter runs as map-only tasks. There is an issue with
your reducer.
Sent from my BlackBerry, pls excuse typo
-Original Message-
From: "Nguyen, Khoa"
Date: Tue, 27 Mar 2012 20:55:24
To: user@hive
Whenever you submit a SQL query, a job id gets generated. You can open the
JobTracker at localhost:50030/jobtracker.jsp.
It shows running jobs and the rest of the details.
Thanks,
Manish
Sent from my BlackBerry, pls excuse typo
-Original Message-
From: Felix.徐
Date: Tue, 20 Mar 2012 12:58:53
Shiv,
Neither HBase nor Hive alone is a perfect fit for a data warehouse application. We
have to use Hive and HBase together to feed a traditional data warehouse application.
HBase: HBase is more update-oriented; insert/update/delete operations are
efficient in HBase. But it doesn't perform as well as Hive
This command can only tell about the data node and block details.
My question was not specific to a particular table. I was not able to run any
Hive command from the command line; however, the same SQL can run from the web UI.
Yes, I am using 2 different users: the 'Guest' user from the web and the 'admin'
user from the command line.
Thanks Mark,
I do have access to the metastore as I am logged in as admin. However, when I
query through the Hue web interface it works for me (using user: Guest).
Here is how my hive-site.xml looks. Is there any property that I need to
extend here?
javax.jdo.option.ConnectionURL
jdbc
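A typical MySQL-backed metastore section of hive-site.xml might look like the
fragment below; the host, database name, and credentials are placeholders, not
values from the thread:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>
```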
Hi,
I am trying to create a table in Hive using the below DDL:
CREATE TABLE pokes (foo INT, bar STRING);
I am getting the below error; I have logged in as the admin user:
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a
connection, pool error Could not create a validated obje