Hi Team, I wanted to understand how Hive on Spark actually maps Hive queries to the Spark jobs it triggers underneath.
AFAIK, each Hive query triggers a new Spark job, but someone contradicted this, so I wanted to confirm the actual design. Is there a reference or design doc that explains this, or can someone who knows the implementation answer here? Thanks, Ninad
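For context, here is a minimal sketch of how one might observe this empirically. It assumes a Hive installation built with the Spark execution engine enabled; `some_table` is a placeholder name:

```sql
-- Switch this session to the Spark execution engine
SET hive.execution.engine=spark;

-- Run two queries in the same session, then check the Spark web UI:
-- if each query appears as a new Spark application, queries map to
-- separate applications; if they appear as jobs within one long-lived
-- application, the session shares a single Spark application.
SELECT count(*) FROM some_table;
SELECT key, count(*) FROM some_table GROUP BY key;
```

Watching the Spark UI while running these in one Beeline/CLI session should show which of the two behaviors actually occurs.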