Please review https://github.com/apache/spark/pull/49654
> Your best bet is to make relative path in driver to be resolved to absolute
> path and pass over to executor with that resolved path.
Right, this is exactly what I was going to implement, and it is how it is
done for DataFrameWriter in the Dat
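The advice above (resolve a relative path on the driver before shipping it to executors) can be sketched roughly as follows. This is a minimal, hypothetical helper, not Spark's actual implementation; the function name is made up for illustration:

```python
from pathlib import Path

def resolve_on_driver(path: str) -> str:
    """Hypothetical helper: normalize a possibly-relative path on the driver.

    Executors may run with a different working directory than the driver,
    so a relative path is ambiguous once it leaves the driver process.
    Resolving it here, against the driver's working directory, gives every
    executor the same unambiguous absolute path.
    """
    return str(Path(path).resolve())

# On the driver, before broadcasting the path to tasks:
resolved = resolve_on_driver("data/input.csv")
```

The key point is only that the resolution happens on the driver; what gets serialized to the executors is already absolute.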
Thanks for the explanation.
Regards
Asif
On Tue, Jan 28, 2025 at 10:00 AM Herman van Hovell
wrote:
> There are many factors:
>
>- Typically it is a race between multiple PRs, where they all pass CI
>without the other changes, and get merged at the same time.
>- Differences between
I am genuinely curious: how do commits that reliably fail the build end up
in master? Is there some window of race where two PRs whose logic conflicts
mess up the final state in master?
I have seen in past few months, while synching up my open PRs, f
There are many factors:
- Typically it is a race between multiple PRs, where they all pass CI
without the other changes, and get merged at the same time.
- Differences between (the nightly job and the PR job) environments
(e.g. size of the machine) can also cause these issues.
- In
If you use vulnerable code in your application, sure, you might be exposed
to its vulnerability. That's a problem for the application rather than
Spark.
Here I am asking if you know of a reason this CVE affects Spark usage,
because you're asking about mitigating it. I'm first establishing whether
Hi Sean,
Just to dig deeper into this: totally agreed that this Hive vulnerability
does not directly affect Spark usage.
However, scanners flag the package as affected as long as the
dependency/package/jar is packaged into a product.
This is part of the processes strongly being advi