Hi Hive Users,
I'm using Cloudera's Hive 0.13, which by default uses the Kryo plan
serialization format:
hive.plan.serialization.format = kryo
Since I'm facing issues with Kryo, can anyone help me identify the other
available options for the Hive plan serialization format?
I know o
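In case it helps, here is a minimal sketch of reading and overriding the property programmatically; the alternative value "javaXML" is my assumption based on the stock Hive 0.13 defaults and may not hold for the CDH build, so please verify before relying on it:

import org.apache.hadoop.hive.conf.HiveConf;

public class PlanSerializationFormat {
    public static void main(String[] args) {
        HiveConf conf = new HiveConf();
        // Default in Hive 0.13 is "kryo".
        System.out.println(conf.get("hive.plan.serialization.format"));
        // "javaXML" is assumed here to be the other accepted value; check your
        // distribution's HiveConf before using it in production.
        conf.set("hive.plan.serialization.format", "javaXML");
    }
}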
Please mention the partition as well while loading data into a partitioned
table.
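A rough sketch of what that looks like over JDBC; the connection URL, table name, and partition spec below are hypothetical placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadIntoPartition {
    public static void main(String[] args) throws Exception {
        // Requires the Hive JDBC driver (hive-jdbc) on the classpath.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Hypothetical HiveServer2 URL and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "user", "");
        try (Statement stmt = conn.createStatement()) {
            // Name the target partition explicitly when the table is partitioned.
            stmt.execute("LOAD DATA INPATH '/tmp/sales_2015_05_01' "
                    + "INTO TABLE sales PARTITION (dt='2015-05-01')");
        }
    }
}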
On Fri, May 1, 2015 at 8:22 PM, Sean Busbey wrote:
> -user@hadoop to bcc
>
> Kumar,
>
> I'm copying your question over to the Apache Hive user list (
> user@hive.apache.org). Please keep your questions about using Hive there.
-user@hadoop to bcc
Kumar,
I'm copying your question over to the Apache Hive user list (
user@hive.apache.org). Please keep your questions about using Hive there.
The Hadoop user list (u...@hadoop.apache.org) is just for that project.
On Fri, May 1, 2015 at 9:32 AM, Asit Parija wrote:
> Hi Kumar,
Resolved by deleting all files under the HDFS scratch directory
(hdfs://namenode:/tmp/hive), deleting the corresponding rows from the MySQL
metastore (hive --> FUNCS), and recreating all the functions; it works now.
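For anyone hitting the same thing, a sketch of the HDFS cleanup step using the FileSystem API; the scratch path is taken from the message above and everything else is assumption, so adjust to your cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanHiveScratchDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Scratch directory mentioned in the thread; change as needed.
        Path scratch = new Path("/tmp/hive");
        if (fs.exists(scratch)) {
            fs.delete(scratch, true); // true = recursive delete
        }
    }
}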
On Thu, Apr 30, 2015 at 3:36 PM, Gerald-G wrote:
> Hi,
> My Hive version is 0.14.0, installed from HDP 2.2.4.
>
> On Thu, Apr 30, 2015 at 3:34 PM, Gerald-G wrote:
Yes and no :-) We're initially using OrcFile.createReader to create a
Reader so that we can obtain the schema (StructTypeInfo) from the file. I
don't believe this is possible with OrcInputFormat.getReader(?):
Reader orcReader = OrcFile.createReader(path, OrcFile.readerOptions(conf));
ObjectInspector inspector = orcReader.getObjectInspector();
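Put together as a self-contained sketch of that approach; the TypeInfoUtils conversion and the cast to StructTypeInfo are my reading of the intent here, not something stated above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;

public class OrcSchemaReader {
    // Open the ORC file directly and derive its schema as a StructTypeInfo.
    public static StructTypeInfo readSchema(Configuration conf, Path path) throws Exception {
        Reader orcReader = OrcFile.createReader(path, OrcFile.readerOptions(conf));
        ObjectInspector inspector = orcReader.getObjectInspector();
        // An ORC file's top-level inspector describes a struct, so the cast
        // below should hold for ordinary table files.
        return (StructTypeInfo) TypeInfoUtils.getTypeInfoFromObjectInspector(inspector);
    }
}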