I have the following Spark SQL query:

SELECT column_name, * FROM table_name;

I have multiple Spark clusters, and this query has been running daily on all
of them. After a recent Zeppelin redeployment, it fails on just one of the
clusters with the following exception:
AnalysisException: cannot resolve '`column_name`' given input columns

The columns listed after the exception do include "column_name".  The data
and schema are the same on all clusters.  The query works through beeline
and only fails when executed through Zeppelin.  I should point out that
there are several Zeppelin instances able to execute this query and only one
that fails, which is why I'm reaching out to the Zeppelin community.  I also
reached out to the Spark users list but unfortunately was unable to get any
help.  I'd appreciate any insight you can provide.
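
In case it helps, this is roughly how the failing paragraph is executed (a
minimal Scala sketch of an equivalent Zeppelin paragraph; the real notebook
may use the %sql interpreter instead, and table_name/column_name are the
same stand-ins as in the query above):

%spark
// Confirm the column is present in the schema Spark resolves on this cluster
spark.table("table_name").printSchema()

// This is the statement that throws the AnalysisException on the one cluster
spark.sql("SELECT column_name, * FROM table_name").show()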
