I am trying to execute a Hive query using Spark 1.5.1 in standalone mode and
the Hive 1.2.0 JDBC driver.
Here is my piece of code:
private static final String HIVE_DRIVER =
    "org.apache.hive.jdbc.HiveDriver";
private static final String HIVE_CONNECTION_URL =
    "jdbc:hive2://localhost:1/idw";
private
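A completed sketch of the snippet above, since the rest of the class is cut off in the archive. The connection-and-query logic below is my assumption of the usual Hive JDBC pattern, not the poster's actual code; the table name is made up, and the port in the URL is left as it appears in the mail (it looks truncated; HiveServer2's default is 10000):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcClient {
    static final String HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
    // Port as it appears in the original mail; likely truncated.
    static final String HIVE_CONNECTION_URL = "jdbc:hive2://localhost:1/idw";

    public static void main(String[] args) throws Exception {
        // Requires hive-jdbc (and its dependencies) on the classpath.
        Class.forName(HIVE_DRIVER);
        try (Connection conn = DriverManager.getConnection(HIVE_CONNECTION_URL);
             Statement stmt = conn.createStatement();
             // Hypothetical query; the thread does not show the actual SQL.
             ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```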
I am using Hive JDBC 1.0 in my Java application to create a connection with
the Hive server and execute queries. I want to set the idle Hive connection
timeout from Java code. Say, for example, the user first creates the Hive
connection, and if the connection remains idle for the next 10 minutes, then this
connection
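Hive JDBC 1.0 does not appear to expose a client-side idle timeout, so one approach is to track the last use from application code and close the connection when a periodic check finds it expired. This is a minimal sketch under that assumption; the class and method names are mine, not part of any Hive API:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical idle-timeout guard for a JDBC connection: the application
 * calls touch() before each query, and a background scheduler closes the
 * connection once isExpired() returns true.
 */
public class IdleGuard {
    private final long timeoutMillis;
    private final AtomicLong lastUsed = new AtomicLong(System.currentTimeMillis());

    public IdleGuard(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** Mark the connection as in use (call before each query). */
    public void touch() {
        lastUsed.set(System.currentTimeMillis());
    }

    /** True when the connection has been idle longer than the timeout. */
    public boolean isExpired(long nowMillis) {
        return nowMillis - lastUsed.get() > timeoutMillis;
    }
}
```

For the 10-minute case in the mail, the guard would be constructed with `new IdleGuard(10 * 60 * 1000L)` and polled from a `ScheduledExecutorService` that calls `connection.close()` when it expires.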
Apart from the above configuration, in order to allow insert/update/delete
operations, you would need to change the following configuration in
conf/hive-site.xml:

<property>
  <name>hive.in.test</name>
  <value>true</value>
</property>

Hope this helps.
On Tue, Sep 8, 2015 at 12:28 PM, Zhu, Jian-bing
wrote:
> Hi,
>
>
>
> I tried to test UPDATE an
to first?
Thanks,
Reena Upadhyay
Hi Jason,
Thanks for the reply :) MapReduce jobs were not able to find the third-party
jars, which was causing the exceptions. I have placed all the dependent
third-party jars inside hadoop_home/lib.
That solved the issue.
Thanks,
Reena Upadhyay
On Fri, Oct 3, 2014 at 1:16 AM,
Hi,
I have a single-argument UDF that takes an SQL string as its argument.
Inside the UDF, I have used the Hive JDBC client to execute the query. The
query execution part inside the UDF is working fine, and I am also able to get
the desired result from the query. My UDF returns a String. I am getting
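A hypothetical reconstruction of the pattern described above, since the mail does not show the UDF's code. In a real Hive UDF this logic would sit in the `evaluate()` method of a class extending `org.apache.hadoop.hive.ql.exec.UDF`; that superclass is omitted here so the sketch compiles without Hive on the classpath, and the connection URL is an assumption:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class QueryUdfSketch {
    private final String url; // e.g. "jdbc:hive2://host:10000/db" (assumed)

    public QueryUdfSketch(String url) {
        this.url = url;
    }

    /** Runs the SQL over JDBC and returns the first column, newline-joined. */
    public String evaluate(String sql) throws Exception {
        List<String> rows = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                rows.add(rs.getString(1));
            }
        }
        return joinRows(rows);
    }

    /** Collapses the result rows into the single String the UDF returns. */
    static String joinRows(List<String> rows) {
        return String.join("\n", rows);
    }
}
```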
Hi,
I want to use a custom Writable data type written in Hadoop as a column data
type for some table in Hive. Any idea how to register the custom Hadoop data
type in Hive, so that it can be used as a column data type?
Thanks,
Reena Upadhyay
radata first, using views. Those are my thoughts,
> there could be other tricks.
>
>
> On Mon, Aug 25, 2014 at 9:26 PM, reena upadhyay
> wrote:
>
>> Hi,
>>
>> As long as the data type is ANSI compliant, its equivalent type is
>> available in Hive. But there ar
t VARCHAR, CHAR -> string; DECIMAL(x,0) -> bigint
>
>
> I would typically stage data in Hadoop as all strings and then move it to
> Hive managed/ORC tables with the above mapping.
>
>
>
>
> On Mon, Aug 25, 2014 at 8:42 PM, reena upadhyay
> wrote:
>
>> Hi,
>>
>>
Hi,
Is there any way to create a custom user-defined data type in Hive? I want
to move some table data from a Teradata database to Hive. But in the
Teradata database tables, there are a few column data types that are not
supported in Hive. So to map the source table columns to my destination
table columns I