I'm not sure this is applied consistently across Spark, but I'm dealing
with another change right now where an unqualified path is assumed to be a
local file. The method Utils.resolvePath implements this logic and is used
in several places, so I think this is probably intended behavior: you can
write hdfs:///tmp if you mean to reference /tmp on HDFS.
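
For example, a minimal sketch of qualifying the checkpoint directory (the
app name and paths here are just illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("checkpoint-demo"))

    // Unqualified: the check assumes this is on the local filesystem,
    // even when fs.defaultFS points at HDFS.
    // sc.setCheckpointDir("/tmp")

    // Fully qualified: unambiguously on HDFS, so the warning does not apply.
    sc.setCheckpointDir("hdfs:///tmp")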

On Wed, Oct 12, 2016 at 7:55 PM Koert Kuipers <ko...@tresata.com> wrote:

> I see this warning when running jobs on the cluster:
>
> 2016-10-12 14:46:47 WARN spark.SparkContext: Spark is not running in local
> mode, therefore the checkpoint directory must not be on the local
> filesystem. Directory '/tmp' appears to be on the local filesystem.
>
> However, the checkpoint "directory" that it warns about is a Hadoop path.
> I use an unqualified path, which by Hadoop convention means a path on the
> default filesystem. When running on the cluster, my default filesystem is
> HDFS (and it correctly uses HDFS).
>
> How about changing the method that does this check
> (Utils.nonLocalPaths) to be aware of the default filesystem, instead of
> incorrectly assuming a path is local when no scheme is specified?
>
>
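
For what it's worth, a rough sketch of what a default-filesystem-aware
check could look like (the method name is hypothetical, and this is not
the actual Utils.nonLocalPaths code):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    // Qualify the path against the configured default filesystem
    // (fs.defaultFS) before deciding whether it is local, rather than
    // assuming that a missing scheme means the local filesystem.
    def isNonLocalPath(pathStr: String, hadoopConf: Configuration): Boolean = {
      val path = new Path(pathStr)
      val fs = path.getFileSystem(hadoopConf)  // resolves via fs.defaultFS
      val qualified = fs.makeQualified(path)   // fills in scheme + authority
      qualified.toUri.getScheme != "file"
    }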
