You do not need to place it in every local directory of every node. Just use
`hadoop fs -put` to put it on HDFS. Alternatively, as others have suggested, use S3.
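Something like this should work (untested sketch; the HDFS directory layout here is
just an example, and the bare hdfs:/// URI assumes fs.defaultFS points at your
cluster's namenode):

    # copy the local sample file from the Spark directory into HDFS
    hadoop fs -mkdir -p /user/hadoop/data/mllib
    hadoop fs -put data/mllib/sample_libsvm_data.txt /user/hadoop/data/mllib/

    # then run the example against the HDFS path instead of the local one
    MASTER=yarn ./bin/run-example ml.LogisticRegressionExample \
      hdfs:///user/hadoop/data/mllib/sample_libsvm_data.txt

That way the executors on every node read the same copy, instead of each one
looking for a local file that only exists on the master.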
> On 28 Feb 2017, at 02:18, Yunjie Ji wrote:
>
> After starting DFS, YARN, and Spark, I run this command under the root
> directory of Spark on my master host:
> `MASTER=yarn ./bin/run-example ml.LogisticRegressionExample
> data/mllib/sample_libsvm_data.txt`
Or place the file in S3 and provide the S3 path.
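For example (untested; this assumes the hadoop-aws/s3a connector and your AWS
credentials are already configured on the cluster, and the bucket name is
hypothetical):

    # upload the sample file once, then reference it by URI from any node
    aws s3 cp data/mllib/sample_libsvm_data.txt s3://my-bucket/data/mllib/
    MASTER=yarn ./bin/run-example ml.LogisticRegressionExample \
      s3a://my-bucket/data/mllib/sample_libsvm_data.txt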
Kr
On 28 Feb 2017 1:18 am, "Yunjie Ji" wrote:
> After starting DFS, YARN, and Spark, I run this command under the root
> directory of Spark on my master host:
> `MASTER=yarn ./bin/run-example ml.LogisticRegressionExample
> data/mllib/sample_libsvm_data.txt`
Have you tried specifying an absolute path instead of a relative one?
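For example (a sketch; /opt/spark is a hypothetical install location, and note
that in yarn mode the file would have to exist at that same path on every node,
which is why HDFS or S3 is usually the safer option):

    MASTER=yarn ./bin/run-example ml.LogisticRegressionExample \
      file:///opt/spark/data/mllib/sample_libsvm_data.txt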
Femi
> On Feb 27, 2017, at 8:18 PM, Yunjie Ji wrote:
>
> After starting DFS, YARN, and Spark, I run this command under the root
> directory of Spark on my master host:
> `MASTER=yarn ./bin/run-example ml.LogisticRegressionExample
> data/mllib/sample_libsvm_data.txt`