-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/59446/#review178683
-----------------------------------------------------------




ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
Lines 463 (patched)
<https://reviews.apache.org/r/59446/#comment252832>

    What's the difference with REPLACE_CANNOT_DROP_COLUMNS? They seem to be used 
for the same error message.



ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
Lines 464 (patched)
<https://reviews.apache.org/r/59446/#comment252831>

    Should we write CASCADE in upper case to make it clearer that it is a keyword?


- Sergio Pena


On June 19, 2017, 9:52 a.m., Barna Zsombor Klara wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/59446/
> -----------------------------------------------------------
> 
> (Updated June 19, 2017, 9:52 a.m.)
> 
> 
> Review request for hive and Sergio Pena.
> 
> 
> Bugs: HIVE-16559
>     https://issues.apache.org/jira/browse/HIVE-16559
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> HIVE-16559: Parquet schema evolution for partitioned tables may break if 
> table and partition serdes differ
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 
> 6651900e79a5c3d4ad8329afbe3894544ce9f46e 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> 87928ee930b5ee974d5e4144a584773a243f8d6f 
>   ql/src/test/queries/clientnegative/parquet_alter_part_table_drop_columns.q 
> PRE-CREATION 
>   
> ql/src/test/results/clientnegative/parquet_alter_part_table_drop_columns.q.out
>  PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/59446/diff/2/
> 
> 
> Testing
> -------
> 
> Added a negative qtest. Manually tested that no regression is caused for avro 
> and textfile SerDes when columns are added or replaced in a partitioned table.
> 
> 
> Thanks,
> 
> Barna Zsombor Klara
> 
>