IMHO this is a bad idea, especially in failure scenarios.

How about creating a separate subfolder for each job?
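A minimal sketch of that layout, using the Python standard library only (the base path and job names below are hypothetical; the actual write would be a Spark call such as `df.write.parquet(...)`, shown here only as a comment):

```python
from pathlib import PurePosixPath

# Hypothetical shared output location; substitute your real path.
BASE = PurePosixPath("/data/shared_output")

def output_dir(job_name: str) -> str:
    """Directory a given job should write into.
    A Spark job would then do, e.g.:
        df.write.parquet(output_dir("job_a"))
    """
    return str(BASE / job_name)

def owning_job(file_path: str) -> str:
    """Recover which job wrote a file from its subfolder name."""
    return PurePosixPath(file_path).relative_to(BASE).parts[0]

print(output_dir("job_a"))
# /data/shared_output/job_a
print(owning_job("/data/shared_output/job_b/part-00000.snappy.parquet"))
# job_b
```

Because provenance is encoded in the directory structure rather than in file names, this also avoids depending on Spark's internally generated `part-*` file names, which the DataFrame writer does not expose an option to prefix.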

On Sat, 17 Jul 2021 at 9:11 am, Eric Beabes <mailinglist...@gmail.com>
wrote:

> We have two (or more) jobs that write data into the same directory via the
> Dataframe.save method. We need to be able to figure out which job wrote
> which file, perhaps by providing a 'prefix' to the file names. I was
> wondering if there's an 'option' that allows us to do this. Googling didn't
> turn up any solution, so I thought of asking the Spark experts on this
> mailing list.
>
> Thanks in advance.
>
-- 
Best Regards,
Ayan Guha
