Yeah, I had read that page before, but it doesn't mention that the options
must come before the application jar. Actually, if I put the --class option
before the application jar, I get a ClassNotFoundException.
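
For anyone who finds this thread later, here are my two invocations side by
side (the jar path and class name are just from my test app; the comments
restate spark-submit's documented behavior that everything after the jar is
passed to the application as its own arguments):

  # --num-executors appears after the jar, so spark-submit treats it as an
  # application argument and it never takes effect
  spark-submit --master yarn \
    target/scala-2.10/simple-project_2.10-1.0.jar --num-executors 5

  # the options appear before the jar, so they take effect; the trailing
  # --class scala.SimpleApp is simply passed through to the app itself
  spark-submit --master yarn --num-executors 5 --executor-cores 4 \
    target/scala-2.10/simple-project_2.10-1.0.jar --class scala.SimpleApp

I assume the two executors I kept seeing earlier were just YARN's default
executor count, since my --num-executors 5 was being ignored.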

Anyway, thanks again, Sandy.

On Tue, May 19, 2015 at 11:06 AM, Sandy Ryza <sandy.r...@cloudera.com>
wrote:

> Awesome!
>
> It's documented here:
> https://spark.apache.org/docs/latest/submitting-applications.html
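>
> For reference, the general template that page gives (all options first,
> then the application jar, then the application's own arguments) is roughly:
>
>   ./bin/spark-submit \
>     --class <main-class> \
>     --master <master-url> \
>     --deploy-mode <deploy-mode> \
>     --conf <key>=<value> \
>     ... # other options
>     <application-jar> \
>     [application-arguments]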
>
> -Sandy
>
> On Mon, May 18, 2015 at 8:03 PM, xiaohe lan <zombiexco...@gmail.com>
> wrote:
>
>> Hi Sandy,
>>
>> Thanks for the information. Yes, spark-submit --master yarn
>> --num-executors 5 --executor-cores 4
>> target/scala-2.10/simple-project_2.10-1.0.jar --class scala.SimpleApp is
>> working awesomely. Is there any documentation pointing to this?
>>
>> Thanks,
>> Xiaohe
>>
>> On Tue, May 19, 2015 at 12:07 AM, Sandy Ryza <sandy.r...@cloudera.com>
>> wrote:
>>
>>> Hi Xiaohe,
>>>
>>> All Spark options must go before the jar, or they won't take effect.
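>>>
>>> For your command, that would look something like this (a sketch using the
>>> jar from your earlier mail):
>>>
>>> spark-submit --master yarn --num-executors 5 --executor-cores 4 \
>>>   target/scala-2.10/simple-project_2.10-1.0.jar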
>>>
>>> -Sandy
>>>
>>> On Sun, May 17, 2015 at 8:59 AM, xiaohe lan <zombiexco...@gmail.com>
>>> wrote:
>>>
>>>> Sorry, both of them are actually assigned tasks.
>>>>
>>>> Aggregated Metrics by Executor
>>>>
>>>> Executor ID | Address     | Task Time | Total Tasks | Failed Tasks | Succeeded Tasks | Input Size / Records | Shuffle Write Size / Records | Shuffle Spill (Memory) | Shuffle Spill (Disk)
>>>> 1           | host1:6184  | 1.7 min   | 5           | 0            | 5               | 640.0 MB / 12318400  | 382.3 MB / 12100770          | 1630.4 MB              | 295.4 MB
>>>> 2           | host2:62072 | 1.7 min   | 5           | 0            | 5               | 640.0 MB / 12014510  | 386.0 MB / 10926912          | 1646.6 MB              | 304.8 MB
>>>>
>>>> On Sun, May 17, 2015 at 11:50 PM, xiaohe lan <zombiexco...@gmail.com>
>>>> wrote:
>>>>
>>>>> bash-4.1$ ps aux | grep SparkSubmit
>>>>> xilan     1704 13.2  1.2 5275520 380244 pts/0  Sl+  08:39   0:13
>>>>> /scratch/xilan/jdk1.8.0_45/bin/java -cp
>>>>> /scratch/xilan/spark/conf:/scratch/xilan/spark/lib/spark-assembly-1.3.1-hadoop2.4.0.jar:/scratch/xilan/spark/lib/datanucleus-core-3.2.10.jar:/scratch/xilan/spark/lib/datanucleus-api-jdo-3.2.6.jar:/scratch/xilan/spark/lib/datanucleus-rdbms-3.2.9.jar:/scratch/xilan/hadoop/etc/hadoop
>>>>> -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit --master yarn
>>>>> target/scala-2.10/simple-project_2.10-1.0.jar --class scala.SimpleApp
>>>>> --num-executors 5 --executor-cores 4
>>>>> xilan     1949  0.0  0.0 103292   800 pts/1    S+   08:40   0:00 grep
>>>>> --color SparkSubmit
>>>>>
>>>>>
>>>>> When I look at the Spark UI, I see the following:
>>>>> Aggregated Metrics by Executor
>>>>>
>>>>> Executor ID | Address    | Task Time | Total Tasks | Failed Tasks | Succeeded Tasks | Shuffle Read Size / Records
>>>>> 1           | host1:3048 | 36 s      | 1           | 0            | 1               | 127.1 MB / 2808978
>>>>> 2           | host2:4997 | 0 ms      | 0           | 0            | 0               | 63.4 MB / 1810945
>>>>>
>>>>> So executor 2 is not even assigned a task? Maybe there is a problem in my
>>>>> settings, but I don't know which settings I might have set wrong or failed
>>>>> to set.
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Xiaohe
>>>>>
>>>>> On Sun, May 17, 2015 at 11:16 PM, Akhil Das <
>>>>> ak...@sigmoidanalytics.com> wrote:
>>>>>
>>>>>> Did you try the --executor-cores param? While you submit the job, do a
>>>>>> ps aux | grep spark-submit and see the exact command parameters.
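>>>>>>
>>>>>> For example (note that the JVM shows up as
>>>>>> org.apache.spark.deploy.SparkSubmit, so grepping for SparkSubmit works
>>>>>> as well):
>>>>>>
>>>>>> ps aux | grep SparkSubmit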
>>>>>>
>>>>>> Thanks
>>>>>> Best Regards
>>>>>>
>>>>>> On Sat, May 16, 2015 at 12:31 PM, xiaohe lan <zombiexco...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have a 5-node YARN cluster, and I used spark-submit to submit a
>>>>>>> simple app.
>>>>>>>
>>>>>>>  spark-submit --master yarn
>>>>>>> target/scala-2.10/simple-project_2.10-1.0.jar --class scala.SimpleApp
>>>>>>> --num-executors 5
>>>>>>>
>>>>>>> I have set the number of executors to 5, but in the Spark UI I could see
>>>>>>> only two executors, and it ran very slowly. What did I miss?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Xiaohe
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
