Hello users,

Thanks Bhavesh; as you said, I completely agree.
For that I need to parse the file, extract the queries line by line, and
execute them.
If $bin/hive -f '/path/to/query/file' can execute an entire file without any
overhead (manual parsing etc.), there should be some way of doing the same
through a JDBC application. Let me give this a try.
Thanks.
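Roughly, here is the kind of thing I have in mind (just an untested sketch
against the HiveServer1 JDBC driver; the driver class name, the
jdbc:hive://localhost:10000/default URL and the file path are placeholders
that would need to match the actual setup):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveQueryFileRunner {
    public static void main(String[] args) throws Exception {
        // HiveServer1 JDBC driver; it must be on the classpath together
        // with its Thrift/Hadoop dependencies.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

        Connection con = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        BufferedReader reader = new BufferedReader(
                new FileReader("/path/to/query/file"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // One query per line, as described in the original mail;
                // skip blank lines and comments, and drop a trailing ';'
                // since the driver expects a bare statement.
                String query = line.trim();
                if (query.isEmpty() || query.startsWith("--")) {
                    continue;
                }
                if (query.endsWith(";")) {
                    query = query.substring(0, query.length() - 1);
                }
                System.out.println("Running: " + query);
                stmt.execute(query);
            }
        } finally {
            reader.close();
            stmt.close();
            con.close();
        }
    }
}

If some statements in the file span multiple lines and end with ';', the same
loop could accumulate lines into a buffer and split on ';' instead of
executing each line directly.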

On Tue, Apr 17, 2012 at 4:40 PM, Bhavesh Shah <bhavesh25s...@gmail.com> wrote:

> Hi Chandan,
> Execute the queries that are in your file as ordinary JDBC operations.
> Like:
> sql="insert overwrite table ABC select * from TblA";
> stmt.executeUpdate(sql);
>
> Likewise you can do it for all of your queries. They will execute one by one
> and you will get the results.
>
>
>
> On Tue, Apr 17, 2012 at 4:32 PM, Chandan B.K <chan...@zinniasystems.com> wrote:
>
>> Hi,
>>
>> Scenario: I have started my Hadoop processes (dfs and mapred). I have
>> also started my Hive Thrift server using the command:
>> $bin/hive --service hiveserver
>>
>> I want to execute a query file (just a text file with a list of queries
>> separated by newline characters) through my JDBC application. How do I
>> do it?
>>
>> Note: I am aware that from the command line, I can run this command to
>> execute the queries in the query file:
>> $bin/hive -f /path/to/query/file
>> I am not looking for that approach here.
>>
>>
>> Thanks.
>> --
>>
>> -Regards
>> Chandan.B.K.,
>
>
> --
> Regards,
> Bhavesh Shah
>
>


-- 

-Regards
Chandan.B.K.,


==================================================================================
Cell: +91-9902382263
Alt Email: bkchandan...@yahoo.com
