C:\Users\AppData\Local\Continuum\Anaconda2\python.exe
C:/workspacecode/pyspark/pyspark/churn/test.py
Traceback (most recent call last):
  File "C:/workspacecode/pyspark/pyspark/churn/test.py", line 5, in <module>
    conf = SparkConf()
  File "C:\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\python\pyspark\conf.py", line 104, in __init__
    SparkContext._ensure_initialized()
  File "C:\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\python\pyspark\context.py", line 243, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway()
  File "C:\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\python\pyspark\java_gateway.py", line 79, in launch_gateway
    proc = Popen(command, stdin=PIPE, env=env)
  File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\subprocess.py", line 711, in __init__
    errread, errwrite)
  File "C:\Users\AppData\Local\Continuum\Anaconda2\lib\subprocess.py", line 959, in _execute_child
    startupinfo)
WindowsError: [Error 2] The system cannot find the file specified

Process finished with exit code 1
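
In Spark 2.0.0, launch_gateway in java_gateway.py builds the Popen command from the spark-submit.cmd launcher under SPARK_HOME, so a WindowsError 2 at that line usually means that launcher script cannot be found. The snippet below is only a minimal diagnostic sketch: the SPARK_HOME value is taken from the paths in the traceback above and is an assumption, not a confirmed correct install directory.

import os

# Sketch only: check that SPARK_HOME points at a real Spark install and
# that the spark-submit.cmd launcher exists there before creating SparkConf.
# The path below is copied from the traceback and may not be the right
# directory level on your machine.
os.environ.setdefault(
    "SPARK_HOME",
    r"C:\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6\spark-2.0.0-bin-hadoop2.6")

spark_home = os.environ["SPARK_HOME"]
submit = os.path.join(spark_home, "bin", "spark-submit.cmd")
print("SPARK_HOME:", spark_home)
print("spark-submit.cmd found:", os.path.exists(submit))

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("test")
sc = SparkContext(conf=conf)
print("Spark version:", sc.version)
sc.stop()

If spark-submit.cmd is not found at that location, pointing SPARK_HOME at the directory that actually contains the bin folder should make the gateway launch succeed.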

2016-08-04 16:01 GMT+02:00 pseudo oduesp <pseudo20...@gmail.com>:

> Hi,
> with PySpark 2.0 I get this error:
>
> WindowsError: [Error 2] The system cannot find the file specified
>
> Can someone help me find a solution?
> Thanks
>
>
