Hi,
We are allowing users to write and use their own UDFs in our Hive environment.
When a user creates a function against a database, all users who can use that
database can see (and use) the UDF.
I would like to ask how UDF authorization is done: can a UDF be granted to
specific users, so that other users cannot use it?
Can someone help answer these questions? Thanks
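As far as I know, SQL Standard Based Authorization (Hive 0.13+) only controls
who may create permanent functions (the admin role); there is no per-function
GRANT, so any user who can use the database can call the UDF. Below is a
minimal JDBC sketch of the admin-side creation flow under that authorization
model; the connection URL, class name, and JAR path are all placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUdfAsAdmin {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and user; adjust for your HiveServer2.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/mydb", "admin_user", "");
             Statement stmt = conn.createStatement()) {
            // Under SQL Standard Based Authorization, only the admin role
            // may create permanent functions.
            stmt.execute("SET ROLE ADMIN");
            // 'com.example.MyUdf' and the JAR path are hypothetical.
            stmt.execute("CREATE FUNCTION mydb.my_udf AS 'com.example.MyUdf' "
                    + "USING JAR 'hdfs:///udfs/my-udf.jar'");
        }
    }
}

If per-user visibility is a hard requirement, the only workaround I can think
of is separate databases per user group, since a permanent function is scoped
to (and visible with) its database.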
--
Sent from my NetEase Mail tablet edition
On 2016-01-28 22:11:29, Todd wrote:
Hi,
I am using Hive 0.14 and connecting to the Hive Thrift server over JDBC to run
queries. I have run into two issues:
1. When a query is issued, how can I get the job ID (of the MapReduce job that
runs the query), so that I have a chance to kill the job? (See the sketch
after this message.)
2. I want to execute a SQL file
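On question 1: Hive's JDBC driver exposes the operation log on
org.apache.hive.jdbc.HiveStatement via getQueryLog()/hasMoreLogs() (available
in 0.14, as far as I know, and requiring hive.server2.logging.operation.enabled
on the server). For an MR query the log usually contains a
"Starting Job = job_..." line; the sketch below scrapes the job ID out of it.
The regex and connection details are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.hive.jdbc.HiveStatement;

public class JobIdFromQueryLog {
    // MapReduce job IDs look like job_1453970000000_0042.
    private static final Pattern JOB_ID = Pattern.compile("(job_\\d+_\\d+)");

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "todd", "");
        HiveStatement stmt = (HiveStatement) conn.createStatement();

        // executeQuery() blocks, so run it in a second thread...
        Thread query = new Thread(() -> {
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM a")) {
                while (rs.next()) System.out.println(rs.getLong(1));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        query.start();
        Thread.sleep(1000); // crude: give the operation time to start

        // ...and poll the operation log from this one.
        while (stmt.hasMoreLogs()) {
            for (String line : stmt.getQueryLog()) {
                Matcher m = JOB_ID.matcher(line);
                if (m.find()) {
                    // This ID can be fed to 'hadoop job -kill <id>'.
                    System.out.println("MapReduce job: " + m.group(1));
                }
            }
            Thread.sleep(500);
        }
        query.join();
        conn.close();
    }
}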
[…]your issue exactly, as at some point I faced the same, but I hope this
might help.
Cheers
On 11 Jan 2016, at 11:39, Todd wrote:
Thank you, Sofia.
From the log, it looks like a java.lang.AbstractMethodError is what leads to
the job failure.
I am using Hive 1.2.1 + Spark 1.5.2; is this a compatibility issue?
To see the full error, run Hive with console logging:
hive --hiveconf hive.root.logger=INFO,console
There is a good chance that you are hitting a compatibility problem between
your Hive and Spark versions and installation.
See
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
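For reference, the switch itself is just session-level settings once the Spark
assembly is on Hive's classpath; a rough JDBC sketch following that page (the
values are placeholders for your cluster):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveOnSparkSession {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "todd", "");
             Statement stmt = conn.createStatement()) {
            // Switch this session from MapReduce to Spark.
            stmt.execute("SET hive.execution.engine=spark");
            // Placeholder values; size these for your cluster.
            stmt.execute("SET spark.master=yarn-client");
            stmt.execute("SET spark.executor.memory=2g");
            stmt.execute("SET spark.eventLog.enabled=true");
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM a")) {
                while (rs.next()) System.out.println(rs.getLong(1));
            }
        }
    }
}

Note that each Hive release was built against one particular Spark line (Hive
1.2.1 against Spark 1.3.x, if I remember correctly), which is why mixing it
with Spark 1.5.2 can surface AbstractMethodError.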
On 11 Jan 2016, at 08:47, Todd wrote:
Hi,
I am trying out Hive on Spark with Hive 1.2.1 and Spark 1.5.2. Could someone
help me with this? Thanks!
Following are my steps:
1. Build Spark 1.5.2 without Hive and the Hive Thrift Server. At this point, I
can use it to submit applications with spark-submit --master yarn-client
2. And t
Hi,
Could someone help with this question?
I have a Parquet file, and I need to figure out its schema before I create a
table and run queries against it. I know Spark SQL can do this, but I would
like to ask whether Hive supports this in some way (a sketch follows after
this thread).
Thanks!
On 2016-01-09 11:19:34, "Todd" wrote:
Hi,
I would like to ask whether Hive (1.2.1) supports automatically detecting a
Parquet file's schema.
Thanks.
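To my knowledge, Hive 1.2.1 cannot infer a table schema from a Parquet footer
at CREATE TABLE time, so the schema has to be read out-of-band. A sketch using
the parquet-hadoop API (class names per the newer org.apache.parquet packages;
the file path is a placeholder, and you still map the Parquet types to Hive
column types by hand):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.schema.MessageType;

public class PrintParquetSchema {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point this at the real file on HDFS.
        Path path = new Path("hdfs:///data/events/part-00000.parquet");
        try (ParquetFileReader reader = ParquetFileReader.open(
                HadoopInputFile.fromPath(path, new Configuration()))) {
            // The footer carries the full message type (names + types).
            MessageType schema =
                    reader.getFooter().getFileMetaData().getSchema();
            System.out.println(schema);
        }
    }
}

The parquet-tools CLI ("parquet-tools schema <file>") prints the same thing,
if a one-off look is all you need.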
Hi,
I have Hadoop (2.6.0, pseudo-distributed mode) and Hive (1.2.1) installed on
my local machine. I have a table A whose underlying file takes up 8 HDFS
blocks.
When I run a query like
select count(1) from A
I see only 1 mapper task, but I thought the number of mappers should equal the
block count (8).
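The usual explanation is that hive.input.format defaults to
CombineHiveInputFormat, which packs several co-located blocks into one split,
and mappers track splits rather than blocks. Capping the combined split size
should bring the mapper count back toward the block count; a sketch, assuming
the default 128 MB block size:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MoreMappers {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "todd", "");
             Statement stmt = conn.createStatement()) {
            // CombineHiveInputFormat merges co-located blocks into one split;
            // cap the split at one block (134217728 = 128 MB, assumed) so
            // roughly one mapper runs per block.
            stmt.execute(
                "SET mapreduce.input.fileinputformat.split.maxsize=134217728");
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM A")) {
                while (rs.next()) System.out.println(rs.getLong(1));
            }
        }
    }
}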
Hi,
I would like to explore whether Hive on Spark is stable enough to adopt in our
production environment.
As a starting point, is there some documentation to get me started?
Thanks.
I have figured out that Hive supports this.
On 2015-12-04 09:58:48, "Todd" wrote:
Could someone help answer my question? Thanks.
At 2015-12-03 19:12:29, "Todd" wrote:
Hi,
I am using Hive 0.14.0 and have the Hive Thrift server running. While it is
running, I would like to use “create function” to add a permanent function.
Does Hive support this **without restarting** the Hive Thrift server? That is,
after creating the function, will I be able to use it when I connect again?
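For what it's worth, CREATE FUNCTION ... USING JAR (Hive 0.13+) registers the
function in the metastore, and in my experience fresh HiveServer2 connections
can resolve it without a server restart. A minimal two-session JDBC sketch;
the class name and JAR path are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PermanentFunctionNoRestart {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://localhost:10000/default";

        // Session 1: register the permanent function in the metastore.
        try (Connection conn = DriverManager.getConnection(url, "todd", "");
             Statement stmt = conn.createStatement()) {
            // 'com.example.Lower' and the JAR path are hypothetical.
            stmt.execute("CREATE FUNCTION my_lower AS 'com.example.Lower' "
                    + "USING JAR 'hdfs:///udfs/my-udfs.jar'");
        }

        // Session 2: a fresh connection resolves it -- no restart needed.
        try (Connection conn = DriverManager.getConnection(url, "todd", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT my_lower('ABC')")) {
            while (rs.next()) System.out.println(rs.getString(1));
        }
    }
}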
[…] figured out how to do it. :/
Best Regards,
Todd Wilson
Senior Technical Consultant
Coffing Data Warehousing
(513) 292-3158
www.CoffingDW.com
[…]querying this hive_system table and doing some string functions in .NET
to get it, but if there were something like "select version", that would be
great. (A sketch follows below.)
Thank you for listening.
Best Regards,
Todd
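Two options that avoid the string surgery, sketched below: the standard JDBC
metadata call works against any server, and newer releases (Hive 2.1+, if I
remember correctly) also ship a built-in version() UDF. Connection details are
placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveVersionCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "todd", "")) {
            // Standard JDBC: the driver reports the server's product version.
            System.out.println(conn.getMetaData().getDatabaseProductVersion());

            // On newer servers there is also a built-in UDF for this.
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT version()")) {
                while (rs.next()) System.out.println(rs.getString(1));
            }
        }
    }
}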
[…]ply here?
I see a TBL_TYPE column in the ERwin diagram under the TBLS table, so I
thought this information might be kept there (if supported). (A sketch
follows below.)
Thank you very much for your help!!!
Best Regards,
Todd Wilson
Senior Technical Consultant
Coffing Data Warehousing
(513) 292-3158
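The TBL_TYPE column does hold this (values such as MANAGED_TABLE,
EXTERNAL_TABLE, VIRTUAL_VIEW), and DESCRIBE FORMATTED <table> shows it as
"Table Type" without touching the metastore at all. If you do want to query
the backing database directly, a sketch, assuming a MySQL-backed metastore
(URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TableTypesFromMetastore {
    public static void main(String[] args) throws Exception {
        // Hypothetical MySQL metastore; adjust URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/metastore", "hive", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TBL_NAME, TBL_TYPE FROM TBLS")) {
            while (rs.next()) {
                // e.g. my_table    MANAGED_TABLE
                System.out.println(rs.getString(1) + "\t" + rs.getString(2));
            }
        }
    }
}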
> Sorry, the adaptive heartbeat code is not in this GitHub code; we are
> discussing it.
>
> On Fri, Feb 17, 2012 at 11:00 AM, Anty wrote:
>>
>> Hi Todd,
>>
>> Yes, the rewritten shuffle is in fact a backport of the shuffle from MR2.
>> We m
Hey Schubert,
Looking at the code on GitHub, it looks like your rewritten shuffle is
in fact just a backport of the shuffle from MR2. I didn't look closely
- are there any distinguishing factors?
Also, the OOB heartbeat and adaptive heartbeat code seems to be the
same as what's in 1.0?
There's also a config dfs.supergroup - users in the supergroup act as
superusers with regard to HDFS permissions.
-Todd
On Fri, Oct 29, 2010 at 12:10 AM, Pavan wrote:
> IMHO, there is no straightforward way of doing this in Hadoop except that
> you need to install Hadoop compone