Understood, thanks Evyatar.
On Mon, Nov 7, 2022, 17:42 Evy M wrote:
> TBH I'm not sure why there is an issue casting the int to BigInt and I'm
> also not sure about the Jira ticket, I hope someone else can help here.
> Regarding the solution - IMO the more correct solution here would be to
> modify the Hive table to use INT since it seems that there is no need to
> use BigInt.
Hi Evyatar,
Yes, directly reading the Parquet data works. However, since we use the Hive
metastore to abstract away the underlying datastore details, we want to avoid
accessing the files directly.
I guess the only option then is to either change the data or change the
schema in the Hive metastore, as you suggested.
TBH I'm not sure why there is an issue casting the int to BigInt and I'm
also not sure about the Jira ticket, I hope someone else can help here.
Regarding the solution - IMO the more correct solution here would be to
modify the Hive table to use INT since it seems that there is no need to
use BigInt.
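For reference, changing the Hive table definition is a metastore-only
operation. A minimal sketch, assuming a hypothetical table my_table whose
column id is currently declared as BIGINT:

    -- Hive DDL, run in Hive itself (e.g. via beeline); the Parquet files are untouched.
    -- The narrowing BIGINT -> INT change may require relaxing
    -- hive.metastore.disallow.incompatible.col.type.changes first.
    ALTER TABLE my_table CHANGE COLUMN id id INT;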
Hi Naresh,
Have you tried any of the following in order to resolve your issue:
1. Reading the Parquet files directly, not via Hive (i.e.,
spark.read.parquet()), casting the column to LongType and creating the Hive
table based on this dataframe? Hive's BigInt and Spark's Long should have
the same underlying representation (a 64-bit signed integer).
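A minimal sketch of option 1, assuming PySpark with Hive support enabled; the
path, database, table and column names below are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (SparkSession.builder
             .appName("recreate-hive-table-as-long")
             .enableHiveSupport()
             .getOrCreate())

    # Read the Parquet files directly, bypassing the Hive metastore.
    df = spark.read.parquet("/warehouse/path/to/table")

    # Cast the 32-bit int column to LongType so it matches Hive's BIGINT.
    df = df.withColumn("id", col("id").cast("long"))

    # Create a new Hive table from the cast dataframe (written to a new
    # managed location, so it does not clash with the files read above).
    df.write.mode("overwrite").saveAsTable("mydb.my_table_bigint")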