zjffdu commented on pull request #4097:
URL: https://github.com/apache/zeppelin/pull/4097#issuecomment-826297779


   @Reamer `spark.archives` works, but it is only available in Spark 3.1 and 
later, and I think it is better to put the conda env in cloud storage and then 
specify it via `spark.archives` (because spark-submit runs in a pod, we cannot 
specify a local file as `spark.archives`). Do you think that would work for users?
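   For illustration, a minimal sketch of that workflow, assuming `conda-pack` is installed and using a placeholder bucket path and env name (not part of this PR):

   ```shell
   # Pack a conda environment into a relocatable archive (requires the
   # conda-pack package) and upload it to cloud storage that Spark can read.
   conda create -y -n pyspark_env python=3.8 numpy
   conda pack -n pyspark_env -o pyspark_env.tar.gz
   aws s3 cp pyspark_env.tar.gz s3://my-bucket/envs/pyspark_env.tar.gz

   # Ship the archive via spark.archives (Spark 3.1+). The "#environment"
   # fragment is the directory name the archive is unpacked into on each pod,
   # so the Python binaries can be resolved relative to it.
   spark-submit \
     --conf spark.archives=s3://my-bucket/envs/pyspark_env.tar.gz#environment \
     --conf spark.pyspark.python=./environment/bin/python \
     app.py
   ```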
   
   And I think this PR is orthogonal to `spark.archives`, because 
`spark.archives` controls the Python environment of both the driver and the 
executors, and the executors are out of Zeppelin's control. That's why the 
approach here only works for the Python interpreter, whose Python environment 
Zeppelin can control. But of course we should make the configuration of the 
Spark and Python interpreters as similar as possible.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
