HeartSaVioR commented on PR #49654:
URL: https://github.com/apache/spark/pull/49654#issuecomment-2628952869

   I'd like to make it super clear which scenario(s) we struggle with without this fix and how this fix helps resolve them.
   
   > The use case mostly applies to single node local Spark cluster.
   
   I don't think it is a very common scenario for people to install Spark in multiple directories and run the driver and executor in separate directories (or otherwise set different working directories). Using a relative path that resolves to different directories per process doesn't seem like a common scenario, and I'd like to see the details.
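
   To be explicit about what I mean by "resolves to different directories per process": a relative path is resolved against each JVM's current working directory, so a driver and an executor launched from different directories would see different absolute locations. The sketch below is only an illustration, not code from this PR, and the directory names are hypothetical.

   ```scala
   import java.nio.file.Paths

   object RelativePathDemo {
     def main(args: Array[String]): Unit = {
       // A relative path resolves against the current working directory of
       // the running JVM, so the same string can point to different places.
       val relative = Paths.get("checkpoint")
       println(relative.toAbsolutePath)
       // e.g. /opt/driver-dir/checkpoint when launched from /opt/driver-dir,
       //      /opt/executor-dir/checkpoint when launched from /opt/executor-dir
     }
   }
   ```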

