Hmmm. So Beeline blew up *before* the query was even submitted to the
execution engine? One would think 16G would be plenty for an 8M-row SQL
statement.
Some suggestions, if you feel like going further down the rabbit hole:
1. Confirm your Beeline Java process is indeed running with the expanded
memory (
I set the heap size using HADOOP_CLIENT_OPTS all the way to 16g and still
no luck.
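For reference, this is roughly how I'd set and sanity-check it — a minimal sketch, assuming a stock Hadoop client setup (the URL is a placeholder). Note that HADOOP_CLIENT_OPTS only affects the local client JVM, not HiveServer2's own heap:

```shell
# Hedged sketch: raise the Beeline client heap via HADOOP_CLIENT_OPTS.
# This only changes the local client JVM, not the HiveServer2 heap.
export HADOOP_CLIENT_OPTS="-Xmx16g ${HADOOP_CLIENT_OPTS:-}"
echo "HADOOP_CLIENT_OPTS=$HADOOP_CLIENT_OPTS"

# Then launch beeline as usual (connection URL is a placeholder):
#   beeline -u jdbc:hive2://localhost:10000
# and from another terminal, confirm the flag actually reached the JVM:
#   jps -lvm | grep -i beeline    # should show -Xmx16g
```

If `jps -lvm` doesn't show `-Xmx16g` on the beeline process, the variable is being overridden somewhere (e.g. in hadoop-env.sh), which would explain the OOM despite the setting.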
I tried to go down the table-join route, but the problem is that the
relation is not an equality, so it would be a theta join, which is not
supported in Hive.
Basically what I am doing is a geographic intersection agai
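One workaround that sometimes works when the ON clause can't hold a non-equality: do a cross join and push the theta predicate into WHERE. It's expensive (every row pair is materialized before filtering), but it's valid HiveQL. A minimal sketch — table and column names here are invented for illustration, this is not your schema:

```shell
# Hedged sketch: rewrite a theta join as CROSS JOIN + WHERE.
# Tables/columns (points, regions, lon/lat bounds) are made-up examples.
QUERY='
SELECT p.id, r.region_id
FROM points p
CROSS JOIN regions r
WHERE p.lon BETWEEN r.min_lon AND r.max_lon
  AND p.lat BETWEEN r.min_lat AND r.max_lat;'
echo "$QUERY"
# Then run it through beeline (URL is a placeholder):
#   beeline -u jdbc:hive2://localhost:10000 -e "$QUERY"
```

For a geographic intersection specifically, people often also add an equi-join on a coarse grid/tile key to both tables so Hive can use a real join, and keep only the fine-grained theta predicate in WHERE — that avoids the full cross product.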
I mean, is there any other way to execute these commands automatically when
starting beeline or connecting to HS2 from the Java JDBC API, while loading the
.hiverc file at the same time?
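Two options that may cover this, sketched below — whether the `initFile` URL parameter is available depends on your Hive version, and the paths are placeholders:

```shell
# Hedged sketch: run setup statements automatically at connect time.
# The UDF class name and file paths are made-up examples.
cat > /tmp/init.sql <<'EOF'
create temporary function myfunc as 'com.example.MyUDF';
EOF

# 1) beeline's -i flag runs an init script right after connecting:
#      beeline -u jdbc:hive2://localhost:10000 -i /tmp/init.sql
# 2) Newer HiveServer2 JDBC drivers also accept an initFile parameter in
#    the connection URL, which covers plain JDBC clients as well:
#      jdbc:hive2://localhost:10000/default;initFile=/tmp/init.sql
echo "init script ready: /tmp/init.sql"
```

Option 2 is the one that helps from the Java JDBC API, since the statements run server-side on connect without any beeline involvement.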
On 2016-09-02 19:56:29, "Maria" wrote:
Hi, all:
I have set "hive.users.in.admin.role=hive" in hive-site.xml, and I have some
"create temporary function ..." statements in .hiverc. When I start beeline, it fails with NO
PRIVILEGE:
FAILED: HiveAccessControlException Permission denied: Principal [name=hive,
type=USER] does not have following privi
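One thing worth checking, sketched below under the assumption that you're on SQL-standard authorization: being listed in hive.users.in.admin.role does not activate the role automatically — the session has to issue SET ROLE ADMIN first. Ordering it before the UDF registration inside the same init script is one way to get that (the UDF class name is a made-up example):

```shell
# Hedged sketch: activate the admin role before any privileged statement.
# Class name 'com.example.MyUDF' is a placeholder.
cat > /tmp/hiverc.sql <<'EOF'
set role admin;
create temporary function myfunc as 'com.example.MyUDF';
EOF
cat /tmp/hiverc.sql
```

If .hiverc runs before the role switch, the create temporary function statement executes as the plain USER principal, which would produce exactly the HiveAccessControlException you're seeing.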
Hi Gopal,
We are using MR, not Tez.
Since the output size of the ad-hoc queries is something we can determine,
rather than the time the job takes, I was wondering more about a quota on
output size / number of rows.
Thanks,
Ravi
On Fri, Sep 2, 2016 at 2:57 AM, Gopal Vijayaraghavan
wrote:
>
> > Are t