Hello Everyone,

We are using Postgres as the persistent store for the Hive metastore.

We use the schematool to create the Hive schema, and our Hive configs
have table and column validation enabled.
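For reference, these are the validation properties we have enabled (a sketch of our hive-site.xml; the property names below are the Hive 2.x DataNucleus names, older releases use datanucleus.validateTables etc. without the ".schema" segment):

```xml
<!-- hive-site.xml: DataNucleus schema validation (Hive 2.x property names) -->
<property>
  <name>datanucleus.schema.validateTables</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.schema.validateColumns</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.schema.validateConstraints</name>
  <value>true</value>
</property>
```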

While trying to create a simple Hive table, we ran into the following error:

Error: Error while processing statement: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:javax.jdo.JDODataStoreException: Wrong precision for
column "COLUMNS_V2"."COMMENT" : was 4000 (according to the JDBC driver)
but should be 256 (based on the MetaData definition for field
org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).

It looks like the Hive metastore validation expects a precision of 256, but
when I looked at the metastore script for Postgres, it creates the column
with precision 4000.

Interestingly, the MySQL scripts for the same Hive version create the
column with precision 255.

Is there a config to tell the Hive metastore validation layer what the
appropriate column precision should be, based on the underlying persistent
store used? Or is the known workaround to turn off validation when using
Postgres as the persistent store?

Thanks,
Siddhi
