[ https://issues.apache.org/jira/browse/SPARK-50118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-50118:
---------------------------------
    Fix Version/s:     (was: 4.0.0)

> Spark removes working directory while Python UDF runs
> -----------------------------------------------------
>
>                 Key: SPARK-50118
>                 URL: https://issues.apache.org/jira/browse/SPARK-50118
>             Project: Spark
>          Issue Type: Bug
>          Components: Connect, PySpark
>    Affects Versions: 3.5.2
>            Reporter: Peter Andrew
>            Assignee: Hyukjin Kwon
>            Priority: Major
>              Labels: pull-request-available
>
> With Spark Connect + PySpark, we can stage files using `spark.addArtifacts`. 
> When a Python UDF is executed, the working directory is set to a folder with 
> the corresponding artifacts available.
> I have observed on large-scale jobs with long-running tasks (>45 mins) that 
> Spark sometimes removes that working directory even though UDF tasks are 
> still running. This can be seen by periodically calling `os.getcwd()` in the 
> UDF, which raises `FileNotFoundError` once the directory is gone.
> This seems to coincide with log records indicating `Session evicted: <uuid>` 
> from [`isolatedSessionCache`|#L212]. There is a 30-minute timeout there that 
> might be to blame.
> I have not yet been able to write a simple program that reproduces this. I 
> suspect multiple events must coincide, such as a task being scheduled on an 
> executor more than 30 minutes after the last task started. 
> https://issues.apache.org/jira/browse/SPARK-44290 might be relevant.
> cc [~gurwls223] 
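
The `FileNotFoundError` symptom described above can be reproduced locally without Spark. A minimal sketch, in which an explicit `os.rmdir` stands in for Spark's session-eviction cleanup of the artifact directory (this illustrates only the OS-level behavior, not the eviction logic itself):

```python
import os
import tempfile

def cwd_after_removal():
    """Delete the process's working directory out from under it and
    check that os.getcwd() then raises FileNotFoundError, mirroring
    what a long-running UDF observes after session eviction."""
    old_cwd = os.getcwd()
    workdir = tempfile.mkdtemp()
    os.chdir(workdir)
    os.rmdir(workdir)  # stand-in for Spark removing the artifact directory
    try:
        os.getcwd()
        raised = False
    except FileNotFoundError:
        raised = True
    finally:
        os.chdir(old_cwd)  # restore a valid working directory
    return raised

print(cwd_after_removal())
```

On Linux this prints `True`: the kernel's `getcwd(2)` fails with `ENOENT` once the directory is unlinked, which Python surfaces as `FileNotFoundError`.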



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
