Well, I know that the metastore script works fine for Oracle (both base and
transactional).

OK, this is what the table looks like in Oracle. That column is 256 bytes.

[image: screenshot of the COLUMNS_V2 table definition in Oracle]
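
You can confirm the declared length from the Oracle dictionary with a query
along these lines (a sketch; filter further by owner if more than one schema
has a COLUMNS_V2 table):

  -- Check the declared length of COLUMNS_V2.COMMENT in the metastore schema
  SELECT column_name, data_type, data_length
  FROM   all_tab_columns
  WHERE  table_name  = 'COLUMNS_V2'
  AND    column_name = 'COMMENT';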


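If the Postgres script really did create that column as varchar(4000), one
workaround is to bring it in line with what the validation layer expects.
A sketch only, assuming you are free to alter the metastore schema (take a
backup first):

  -- Postgres: shrink COLUMNS_V2.COMMENT to the 256 the metastore model expects
  ALTER TABLE "COLUMNS_V2" ALTER COLUMN "COMMENT" TYPE character varying(256);
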
HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 9 June 2016 at 19:43, Siddhi Mehta <sm26...@gmail.com> wrote:

> Hello Everyone,
>
> We are using Postgres for the Hive persistent store.
>
> We are making use of the schematool to create the Hive schema, and our Hive
> configs have table and column validation enabled.
>
> While trying to create a simple Hive table, we ran into the following error:
>
> Error: Error while processing statement: FAILED: Execution Error, return
> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:javax.jdo.JDODataStoreException: Wrong precision
> for column "COLUMNS_V2"."COMMENT" : was 4000 (according to the JDBC
> driver) but should be 256 (based on the MetaData definition for field
> org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).
>
> Looks like the Hive metastore validation expects it to be 256 (per the
> error above), but when I looked at the metastore script for Postgres, it
> creates the column with precision 4000.
>
> Interestingly, the MySQL scripts for the same Hive version create the
> column with precision 255.
>
> Is there a config that tells the Hive metastore validation layer what the
> appropriate column precision should be, based on the underlying persistent
> store used? Or is the known workaround to turn off validation when using
> Postgres as the persistent store?
>
> Thanks,
> Siddhi
>
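
On the validation question above: as far as I recall the check comes from
DataNucleus, so the switches to relax it live in hive-site.xml. Property
names as I remember them for this Hive generation, so please verify against
your hive-site template before relying on them:

  <!-- hive-site.xml: relax DataNucleus schema validation (a workaround, not a fix) -->
  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
  </property>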
