Re: Thrift Server

2012-07-05 Thread VanHuy Pham
Mark, JDBC does not talk to Hive via the Thrift server. I shut down the Thrift server and JDBC still works. However, you might be right in the case of a non-local connection, or what they call "standalone mode". Van On Thu, Jul 5, 2012 at 8:47 PM, Mark Grover wrote: > Hi Ransom, > JDBC talks to the Hive
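
A minimal sketch of the two modes being discussed, assuming the HiveServer1/Thrift setup of this Hive generation (the port and host below are the usual defaults, not values from the thread):

    # Standalone mode: start the Hive Thrift server (listens on port 10000 by default).
    hive --service hiveserver

    # JDBC clients in standalone mode connect through that server with a host:port URL,
    # e.g. jdbc:hive://localhost:10000/default
    # In embedded (local) mode the URL is simply jdbc:hive:// and no Thrift server is
    # involved, which is why JDBC can keep working after the server is shut down.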

Re: Thrift Server

2012-07-05 Thread Mark Grover
Hi Ransom, JDBC talks to the Hive driver via the Hive Thrift server. Since the Thrift server is limited to a single connection at a time, out of curiosity, how much benefit do concurrent connections through JDBC give you, given that they will also be handled serially when being sent to the Thrift

Re: Hive insert into

2012-07-05 Thread Mark Grover
Abhi, Bejoy is right. To add to the remark, "insert into" doesn't work on bucketed tables as of now. For non-bucketed tables, it should work if you are using Hive version 0.8 or higher. More details on this thread: http://mail-archives.apache.org/mod_mbox/hive-user/201206.mbox/%3c263112076.39498
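
A minimal HiveQL sketch of the behavior described above, assuming Hive 0.8 or higher and a non-bucketed target table (table and column names are hypothetical):

    -- Appends rows to an existing, non-bucketed table; supported from Hive 0.8 onward.
    INSERT INTO TABLE events_summary
    SELECT event_date, COUNT(*)
    FROM events
    GROUP BY event_date;

    -- The same statement against a bucketed table is not supported as of this writing.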

Re: Pulling off server's debug info to Client

2012-07-05 Thread Navis류승우
I've modified Hive to notify the jobId for monitoring purposes. Check the issue and vote for acceptance ^^ https://issues.apache.org/jira/browse/HIVE-3235 2012/7/6 VanHuy Pham > Hi all, > I have been interested in getting the debug information and displaying it > to clients. > Specifically, whe

Pulling off server's debug info to Client

2012-07-05 Thread VanHuy Pham
Hi all, I have been interested in getting the debug information and displaying it to clients. Specifically, when you submit a job to the Hive CLI (on the terminal, say you write "select * from somewhere ...") or through a Thrift server, you will see that Hive displays some debug information on the screen
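
A minimal sketch of one way to capture that output without modifying Hive, assuming the stock log4j-based CLI logging (the query is the one from the message; the log file path is hypothetical):

    # Route Hive's logging to the console at INFO level; the CLI writes this
    # diagnostic output to stderr, so it can be captured separately from the
    # query results on stdout and then relayed to clients.
    hive -hiveconf hive.root.logger=INFO,console \
         -e "select * from somewhere" 2> /tmp/hive_debug.log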

Re: How to list all views

2012-07-05 Thread Tim Havens
There may be a way to select table names from the metastore... if so, name your views like *_view and select for those table names. Of course, if you can query the metastore there's probably an even better way where you don't have to name your views anything special, and just search for a table type
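
A minimal sketch of that metastore query, assuming a MySQL-backed metastore with the standard schema (the database name "metastore" is an assumption):

    -- Views are recorded in TBLS with TBL_TYPE = 'VIRTUAL_VIEW';
    -- managed and external tables appear as 'MANAGED_TABLE' and 'EXTERNAL_TABLE'.
    SELECT TBL_NAME
    FROM metastore.TBLS
    WHERE TBL_TYPE = 'VIRTUAL_VIEW';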

Re: How to list all views

2012-07-05 Thread Stephen R. Scaffidi
Thank you, but what I need is to list only the views or, conversely, only the actual tables. On 07/05/2012 11:14 AM, Bejoy KS wrote: Hi Stephen You can see the views as well, along with the tables, using the Show tables; command. --Original Message-- From: Stephen R. Scaffidi To: user@h

Re: How to list all views

2012-07-05 Thread Bejoy KS
Hi Stephen You can see the views as well, along with the tables, using the Show tables; command. --Original Message-- From: Stephen R. Scaffidi To: user@hive.apache.org ReplyTo: user@hive.apache.org ReplyTo: sscaff...@tripadvisor.com Subject: How to list all views Sent: Jul 5, 2012 20:38 How

How to list all views

2012-07-05 Thread Stephen R. Scaffidi
How can I list all the views in Hive? I can't seem to figure out how to do it with HQL.

Re: Hive uploading

2012-07-05 Thread Bejoy Ks
Hi Yogesh The verbose option didn't work there as there is no DEBUG logging; can you please add --verbose to the beginning of your sqoop command? Let me frame a small sqoop import sample for you. Please run this command and post the console log: sqoop import --verbose --connect jdbc:mysql://
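
With --verbose moved to the front of the command already posted in this thread, the sample would look roughly like this (connection string, credentials, and --hive-home value are taken from the thread as-is, not verified here):

    sqoop import --verbose \
      --connect jdbc:mysql://localhost:3306/Demo \
      --username sqoop1 --password SQOOP1 \
      --table Dummy \
      --hive-import --hive-table dummyhive --create-hive-table \
      --hive-home HADOOP/hive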

RE: Hive uploading

2012-07-05 Thread yogesh.kumar13
Hi Bejoy, I have created a new table called Troy, and for Hive it's troyhive, as it was showing "Output directory already exists". sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 --password SQOOP1 -table troy --hive-table troyhive --create-hive-table --hive-import --hive-
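
The "output directory already exists" error usually comes from the temporary HDFS directory Sqoop fills before loading the data into Hive, so an alternative to renaming the table is to clear that directory and re-run the original import (the path below assumes Sqoop's default of a directory named after the table under the user's HDFS home):

    # Remove the staging directory left behind by the earlier failed import,
    # then re-run the original sqoop import for table Dummy.
    hadoop fs -rmr Dummy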

Re: Hive uploading

2012-07-05 Thread Bejoy Ks
Hi Yogesh The verbose option won't make any difference in operation, but it gives more logging information on the console, which could be helpful for finding hints. So please post your console dump/log along with the sqoop import command with verbose enabled. Regards Bejoy KS

RE: Hive uploading

2012-07-05 Thread yogesh.kumar13
Hello Bejoy, sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 --password SQOOP1 -table Dummy --hive-table dummyhive --create-hive-table --hive-import --hive-home HADOOP/hive --verbose Still the same: no table has been created. I am not able to see the dummyhive table

Re: Hive uploading

2012-07-05 Thread Bejoy Ks
Hi Yogesh Please try out this command: sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 --password SQOOP1 -table Dummy --hive-table dummyhive --create-hive-table --hive-import --hive-home HADOOP/hive --verbose Regards Bejoy KS

RE: Hive uploading

2012-07-05 Thread yogesh.kumar13
Hi Bejoy I have confirmed the Hive installation; it is the same for both. I used the command echo $HIVE_HOME on both the sqoop terminal and the hive terminal, and both return the same path, HADOOP/hive. I am new to Hive and Sqoop; would you please give an example using the --verbose option with this command: sqoop import --conne

Re: Hive uploading

2012-07-05 Thread Bejoy Ks
Hi Yogesh No issues seen on first look. Can you run the sqoop import with the --verbose option and post the console dump? Do you have multiple Hive installations? If so, please verify whether you are using the same Hive for both the Sqoop import and for verifying the data using the Hive CLI. (the

Hive uploading

2012-07-05 Thread yogesh.kumar13
Hi I have created a table in MySQL named Dummy; it has 2 columns and 1 row of data. I want to upload that table into Hive using the Sqoop tool. I used this command: sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 --password SQOOP1 -table Dummy --hive-table dummyhive

Re: hi users

2012-07-05 Thread Mohammad Tariq
Increase your replication factor as suggested by Nitin. A replication factor > 1 is a must to avoid data loss. You can also use the "fsck" command to check HDFS. Also verify whether you are able to see all the datanodes on the namenode web UI page. Regards, Mohammad Tariq On Thu, Jul 5, 2012 at
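
A minimal sketch of the checks mentioned above (the web UI port is the usual default, so treat it as an assumption for your cluster):

    # Summarize HDFS health: missing, corrupt, and under-replicated blocks.
    hadoop fsck /

    # Per-file block and location detail, if needed:
    hadoop fsck / -files -blocks -locations

    # The namenode web UI (typically http://<namenode-host>:50070/) lists the
    # live and dead datanodes.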

Re: hi users

2012-07-05 Thread shaik ahamed
Thanks for your reply, Nitin. On Thu, Jul 5, 2012 at 12:30 PM, Nitin Pawar wrote: > Read up on the hadoop fsck command > > On Thu, Jul 5, 2012 at 12:29 PM, shaik ahamed wrote: >> Hi Nitin, >> >> How can I check the dfs health? Could you please guide me through the steps... >> >> On Thu, Jul 5, 2012 at

Re: hi users

2012-07-05 Thread Nitin Pawar
Read up on the hadoop fsck command. On Thu, Jul 5, 2012 at 12:29 PM, shaik ahamed wrote: > Hi Nitin, > > How can I check the dfs health? Could you please guide me through the steps... > > On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar wrote: > >> Can you check dfs health? >> >> I think a few of your nodes are

Re: hi users

2012-07-05 Thread Nitin Pawar
If you have 2 nodes and the replication factor is 1, then this is a problem. I would suggest a minimum replication factor of 2; this would make sure that even if 1 node is down, data is served from the replicated blocks. On Thu, Jul 5, 2012 at 12:27 PM, shaik ahamed wrote: > Thanks for the reply guy
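
A minimal sketch of raising the replication factor to 2, assuming existing data should be re-replicated as well (the path "/" is an assumption):

    # Re-replicate files already in HDFS to 2 copies.
    hadoop fs -setrep -R 2 /

    # For files written from now on, set dfs.replication in hdfs-site.xml:
    #   <property>
    #     <name>dfs.replication</name>
    #     <value>2</value>
    #   </property>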