65904 [uber-SubtaskRunner] INFO org.apache.hadoop.hive.ql.exec.Task - Execution completed successfully
65904 [uber-SubtaskRunner] INFO org.apache.hadoop.hive.ql.exec.Task - MapredLocal task succeeded
65904 [uber-SubtaskRunner] INFO org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask - Execu
Hi users,
The following drop column syntax does not work.
> alter table test_db.test_table drop column col_1;
FAILED: ParseException line 1:41 mismatched input 'column' expecting
PARTITION near 'drop' in drop partition statement
According to the Hive manual, REPLACE COLUMNS can be used to drop columns.
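For reference, REPLACE COLUMNS works by re-declaring the full column list minus the column you want to remove; it only rewrites the table's metadata, not the underlying files, and it only applies to tables using Hive's native SerDes (e.g. LazySimpleSerDe). A minimal sketch, assuming test_table currently has columns col_1, col_2 and col_3 with hypothetical types:

  ALTER TABLE test_db.test_table REPLACE COLUMNS (col_2 STRING, col_3 INT);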
Have you looked at Apache Falcon?
On Jan 8, 2016 2:41 AM, "Elliot West" wrote:
> Further investigation appears to show this going wrong in a copy phase of
> the plan. The correctly functioning HDFS → HDFS import copy stage looks
> like this:
>
> STAGE PLANS:
> Stage: Stage-1
> Copy
>
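For context, a stage plan like the one quoted above is typically produced by explaining a Hive IMPORT of a previously EXPORTed table. A minimal sketch, with the database, table, and staging path names assumed for illustration:

  EXPORT TABLE source_db.orders TO '/staging/orders_export';
  -- surface the copy/move stages the import will run
  EXPLAIN IMPORT TABLE target_db.orders FROM '/staging/orders_export';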
Try this https://github.com/dbist/workshops/tree/master/hive/JSON
On Jan 14, 2016 12:59 PM, "sri sowj" wrote:
> Hi All,
>
> I am trying to execute Hive commands on a JSON file using
> JSON SerDes, but I am always getting null values, not the actual data.
> I have used the SerDes provided in
> "code.goo
Hi,
That is a valid question. However, my two cents: I would look beyond
Hortonworks and other vendors and consider a solution that can provide high
availability (HA) (not to be confused with continuous availability) for both
the Hive metastore and HiveServer2. Depending on your Service Level
Hi,
I am trying to set up the Hortonworks Data Platform. I want to set up Hive
in high-availability mode (both the metastore and HiveServer2).
Along with that, Hortonworks' recommendation is to back up the RDBMS behind
the Hive service.
Can anyone please let me know what is the best practice aro
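For what it's worth, a distribution-agnostic sketch of the usual HA building blocks: multiple metastore instances listed in hive.metastore.uris (clients fail over between them), and HiveServer2 instances registered in ZooKeeper for dynamic service discovery. The property names below are real Hive settings; the host names are placeholders:

  hive.metastore.uris = thrift://ms-host1:9083,thrift://ms-host2:9083
  hive.server2.support.dynamic.service.discovery = true
  hive.server2.zookeeper.namespace = hiveserver2
  hive.zookeeper.quorum = zk1:2181,zk2:2181,zk3:2181

Clients then connect through ZooKeeper rather than to a single HiveServer2 host, e.g.:

  jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

The RDBMS backing the metastore still needs its own backup/replication strategy, as the question notes.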