Github user fszabo2 commented on a diff in the pull request:

    https://github.com/apache/sqoop/pull/60#discussion_r238251774
  
    --- Diff: src/java/org/apache/sqoop/hive/HiveTypes.java ---
    @@ -37,16 +42,28 @@
       private static final String HIVE_TYPE_STRING = "STRING";
       private static final String HIVE_TYPE_BOOLEAN = "BOOLEAN";
       private static final String HIVE_TYPE_BINARY = "BINARY";
    +  private static final String HIVE_TYPE_DECIMAL = "DECIMAL";
     
       public static final Log LOG = LogFactory.getLog(HiveTypes.class.getName());
     
       private HiveTypes() { }
     
    +
    +  public static String toHiveType(int sqlType, SqoopOptions options) {
    +
    +    if (options.getConf().getBoolean(ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false)
    +        && (sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)){
    +      return HIVE_TYPE_DECIMAL;
    +    }
    +    return toHiveType(sqlType);
    +  }
    +
    +
       /**
        * Given JDBC SQL types coming from another database, what is the best
        * mapping to a Hive-specific type?
        */
    -  public static String toHiveType(int sqlType) {
    +  private static String toHiveType(int sqlType) {
    --- End diff --
    
    After taking another look at this file, it turns out that this method is
    only called during textfile import. Since my only intended use case is
    parquet file import, I reverted this part of the change.
    
    The implementation will throw a RuntimeException (for mapping errors) and
    log warnings in the case of parquet files, as you've suggested.
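    
    For reference, a minimal sketch of the behavior I have in mind (the null
    check and the exception message below are my illustrative assumptions,
    not the final committed code):
    
        // Sketch only, inside HiveTypes; assumes toHiveType(int) returns
        // null for SQL types it cannot map.
        public static String toHiveType(int sqlType, SqoopOptions options) {
          if (options.getConf().getBoolean(
                  ConfigurationConstants.PROP_ENABLE_PARQUET_LOGICAL_TYPE_DECIMAL, false)
              && (sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)) {
            return HIVE_TYPE_DECIMAL;
          }
          String hiveType = toHiveType(sqlType);
          if (hiveType == null) {
            // Fail fast on the parquet path instead of silently skipping
            // the column; textfile import keeps the existing
            // warn-and-continue behavior of toHiveType(int).
            throw new RuntimeException(
                "Cannot convert SQL type " + sqlType + " to a Hive type.");
          }
          return hiveType;
        }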

