Thank you for your reply, Jeff.

"%sh" ?
"sh" seems like request something execution code.
I tried "%sh", then

%sh <program name with full path>
      %sh bash: <program name>: no permission

I made binary file from .py to .pyc, but the answer was as same.
I am sorry seems like doubting you, but Is "%sh" the resolution?
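
The "Permission denied" message usually means bash tried to execute the script file itself, and the file lacks the execute bit; the fix is either to set that bit or to pass the script to a launcher (spark-submit, python) as an argument. A minimal sketch of both options, assuming a hypothetical script at /tmp/hello.py (substitute your own path):

```shell
# Create a hypothetical test script (stands in for the real program).
cat > /tmp/hello.py <<'EOF'
print("hello")
EOF

# Invoking the file directly fails unless it has the execute bit:
#   %sh /tmp/hello.py
#   bash: /tmp/hello.py: Permission denied

# Option 1: make it executable (to run directly it would also need
# a #!/usr/bin/env python3 shebang line).
chmod +x /tmp/hello.py

# Option 2 (simpler): pass it to the launcher explicitly; no execute
# bit is needed because the launcher, not bash, reads the file.
python3 /tmp/hello.py
# In Zeppelin, the same idea:
#   %sh $SPARK_HOME/bin/spark-submit /tmp/hello.py
```

Note that .pyc compilation does not help here: the permission check happens before the file contents are ever read.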

-Keiji

2017-10-03 17:35 GMT+09:00 Jianfeng (Jeff) Zhang <jzh...@hortonworks.com>:

>
> I am surprised that you would use %spark-submit; there is no documentation
> for %spark-submit. If you want to use spark-submit in Zeppelin, you
> could use %sh.
>
>
> Best Regard,
> Jeff Zhang
>
>
> From: 小野圭二 <onoke...@gmail.com>
> Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Date: Tuesday, October 3, 2017 at 12:49 PM
> To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Subject: How to execute spark-submit on Note
>
> Hi all,
>
> I searched the mailing list archive for this topic, but could not find a
> clear solution, so I am posting it again.
>
> I am using Zeppelin 0.8.0 and have installed Spark 2.2 in a separate path,
> just to check my test program.
> I then wrote a quite simple sample Python program to check how this works.
>
> 1. The code works fine in a note in Zeppelin.
> 2. The same code, with SparkContext initialization added, works fine on
> Spark when run with 'spark-submit'.
> 3. I tried to execute "2" from a note in Zeppelin with the following
> script (yes, the "spark" interpreter is bound to the note):
>         %spark-submit <program name with full path>
>           -> interpreter not found error
> 4. I set 'SPARK_SUBMIT_OPTIONS' in zeppelin-env.sh as described in the
> docs, e.g.
>     export SPARK_SUBMIT_OPTIONS='--packages
> com.databricks:spark-csv_2.10:1.2.0'
> 5. Then I ran
>      %spark-submit <program name with full path>
>       -> interpreter not found error (same as in "3")
>
> How can I use spark-submit from a note?
> Any advice is appreciated.
>
> -Keiji
>
