> I guess I see different things. Having used all the tech. In particular for
> large hive queries I see OOM simply SCANNING THE INPUT of a data directory,
> after 20 seconds!
If you've got an LLAP deployment you're not happy with - this list is the right
place to air your grievances. I usual…
"You're off by a couple of orders of magnitude - in fact, that was my last
year's Hadoop Summit demo, 10 terabytes of Text on S3, converted to ORC +
LLAP."
"We've got sub-second SQL execution, sub-second compiles, sub-second
submissions … with all of it adding up to a single or double digit second …"
> It is not that simple. The average Hadoop user has 6-7 years of data. They do
> not have a "magic" convert everything button. They also have legacy processes
> that don't/can't be converted.
…
> They do not want the "fastest format" they want "the fastest hive for their
> data".
I've yet to …
"Yes, it's a tautology - if you cared about performance, you'd use ORC,
because ORC is the fastest format."
It is not that simple. The average Hadoop user has 6-7 years of data. They
do not have a "magic" convert everything button. They also have legacy
processes that don't/can't be converted. The …
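For context, the "convert" being argued about above is typically a one-off CREATE TABLE AS SELECT into ORC. A minimal HiveQL sketch (the table names logs_text and logs_orc are hypothetical, not from the thread):

```sql
-- Hypothetical example: copy an existing text-format table into an
-- ORC-backed table via CTAS. The original data stays untouched.
CREATE TABLE logs_orc STORED AS ORC AS
SELECT * FROM logs_text;
```

The thread's point of contention is exactly that this step must be run per table, which is impractical for years of accumulated data and legacy pipelines.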
Kaustubh, there is not much to do without you supplying a way to reproduce the
issue and/or the relevant logs.
Dudu
From: Kaustubh Deshpande [mailto:kaustubh.deshpa...@exadatum.com]
Sent: Friday, June 23, 2017 10:29 AM
To: user@hive.apache.org; dev-subscr...@hive.apache.org
Subject: Failed to cr
Hi,
- I am facing an issue in Apache Hive v0.13.0 when creating a table.
- I am executing a Hive script in which I have DROP and CREATE TABLE statements.
- The CREATE TABLE statement is of the form CREATE TABLE db_nm.tble_nm AS SELECT * FROM db_nm.other_tbl.
- db_nm.tble_nm is …
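The DROP-then-CTAS sequence described in the message would look roughly like the following HiveQL (table names taken from the message; the IF EXISTS clause is an assumption added as the usual safeguard, and may not be in the original script):

```sql
-- Drop the target table if a previous run left it behind.
-- IF EXISTS avoids an error on the first run (assumed, not confirmed
-- to be in the original script).
DROP TABLE IF EXISTS db_nm.tble_nm;

-- Recreate it from the source table with CREATE TABLE AS SELECT (CTAS).
-- CTAS fails if the target table already exists, hence the DROP above.
CREATE TABLE db_nm.tble_nm AS
SELECT * FROM db_nm.other_tbl;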