I don't think the Spark optimizer supports anything like a statement cache,
where the plan is cached and bind variables (as in an RDBMS) are substituted
for different values, thus saving the parsing.
What you're describing is that the source and tempTable change but the plan
itself remains the same. I have not seen this.
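To make that concrete, here is a minimal sketch (assuming a Spark 2.0
spark-shell, where `spark` is predefined, and a hypothetical "events" view):
each call goes through the full parse/analyze/optimize pipeline again,
because the changed value is baked into the SQL text rather than bound as a
parameter.

// "events" is a hypothetical temp view; in an RDBMS a prepared statement
// would parse once and bind 1 and 2 as parameters. Spark re-derives the
// whole plan for each statement.
val q1 = spark.sql("SELECT * FROM events WHERE id = 1")
val q2 = spark.sql("SELECT * FROM events WHERE id = 2")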
(Dropping user@spark and keeping only dev@.)
This is something that would be great to figure out, if you have time. Two
things worth trying:
1. See how this works on Spark 2.0.
2. If it is slow, try the following:
org.apache.spark.sql.catalyst.rules.RuleExecutor.resetTime()
// run your query
// dumpTimeSpent() returns the cumulative time the optimizer spent per rule
org.apache.spark.sql.catalyst.rules.RuleExecutor.dumpTimeSpent()
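For example, in a spark-shell session (a sketch; the hypothetical "events"
view from above is assumed), you can force the optimizer to run and then
dump the per-rule timings:

import org.apache.spark.sql.catalyst.rules.RuleExecutor

RuleExecutor.resetTime()
// optimizedPlan is lazy, so touching it forces the optimizer to run.
spark.sql("SELECT * FROM events WHERE id = 1").queryExecution.optimizedPlan
println(RuleExecutor.dumpTimeSpent())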
Which version are you using here? If the underlying files change,
technically we should go through optimization again.
Perhaps the real "fix" is to figure out why logical plan creation is so
slow for 700 columns.
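One rough way to reproduce and measure that (a sketch; the 700-column
schema is synthetic and the timing is crude):

import org.apache.spark.sql.functions.lit

// Single-row DataFrame with 700 literal columns.
val wide = spark.range(1).select((1 to 700).map(i => lit(i).as(s"c$i")): _*)
wide.createOrReplaceTempView("wide")

val start = System.nanoTime()
// Touching optimizedPlan forces analysis and optimization of the plan.
spark.sql("SELECT * FROM wide").queryExecution.optimizedPlan
println(s"Plan creation took ${(System.nanoTime() - start) / 1e6} ms")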
On Thu, Jun 30, 2016 at 1:58 PM, Darshan Singh wrote:
> Is there a way I can use