Hi,

I am using Spark SQL to create/alter Hive tables. I have a highly nested
JSON and I am using the SchemaRDD to infer the schema. The JSON has 6
columns, and one of the columns (which is a struct) has around 60 fields
(key-value pairs).
When I run the Spark SQL query for the above table, it just hangs
without any DEBUG/ERROR logs. If I remove a few columns, or some fields from
the column with 60 fields, it works fine.
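For reference, this is roughly what I am doing (paths and table names below
are just placeholders, and I am on the SchemaRDD-based API):

  import org.apache.spark.sql.hive.HiveContext

  val hiveContext = new HiveContext(sc)

  // Infer the schema from the nested JSON; one column is a struct with ~60 fields
  val events = hiveContext.jsonFile("/data/events.json")
  events.registerTempTable("events_staging")

  // This is the statement that hangs when the full schema is present
  hiveContext.sql("CREATE TABLE events_hive AS SELECT * FROM events_staging")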

Is there a limit on the size of a query that could be preventing me from
running the Spark SQL query?
The original query works perfectly if I use the Hive client.

Any help is appreciated.

Thanks,
Udit
