Try putting the CSV at the same path on all the nodes, or in a mount point
path which is accessible by all the nodes.
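
A minimal sketch of such a read, assuming the file lives at the hypothetical
path /data/file.csv on every node (or on a shared mount) and using the
Spark-CSV package mentioned below; on Spark 2.x the built-in
spark.read.csv(...) does the same thing:

    // Zeppelin %spark paragraph or spark-shell
    val df = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")        // assuming the CSV has a header row
      .option("inferSchema", "true")   // let Spark infer column types
      .load("file:///data/file.csv")   // must exist at this path on every node

    df.show(5)

Note that in client mode the driver (running on the Zeppelin node) also
resolves file:// paths locally, so the file has to be readable at that path
on the Zeppelin node as well as on the workers; a shared mount satisfies both.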

Regards,
Meethu Mathew


On Wed, May 10, 2017 at 3:36 PM, Sofiane Cherchalli <sofian...@gmail.com>
wrote:

> Yes, I already tested with spark-shell and pyspark, with the same result.
>
> Can't I use the Linux filesystem to read the CSV, such as
> file:///data/file.csv? My understanding is that the job is sent to and
> interpreted on the worker, isn't it?
>
> Thanks.
>
> On Tue, May 9, 2017 at 20:23, Jongyoul Lee <jongy...@gmail.com>
> wrote:
>
>> Could you test if it works with spark-shell?
>>
>> On Sun, May 7, 2017 at 5:22 PM, Sofiane Cherchalli <sofian...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I have a standalone cluster, one master and one worker, running on
>>> separate nodes. Zeppelin is running on a separate node too, in client
>>> mode.
>>>
>>> When I run a notebook that reads a CSV file located on the worker
>>> node with the Spark-CSV package, Zeppelin tries to read the CSV locally
>>> and fails because the CSV is on the worker node and not on the Zeppelin
>>> node.
>>>
>>> Is this the expected behavior?
>>>
>>> Thanks.
>>>
>>
>>
>>
>> --
>> 이종열, Jongyoul Lee, 李宗烈
>> http://madeng.net
>>
>
