GitHub user wangmiao1981 opened a pull request:

    https://github.com/apache/spark/pull/13163

    [SPARK-15360][Spark-Submit] Should print spark-submit usage when no arguments are specified

    ## What changes were proposed in this pull request?
    
    In 2.0, running ./bin/spark-submit with no arguments raises an exception instead of printing the usage text.
    In this PR, exception handling is added in Main.java for when that exception is thrown: if no additional arguments were supplied, the usage text is printed.
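    The handling described above can be sketched roughly as follows. This is an illustrative assumption of the approach, not the actual Spark launcher code: the class shape, method names, and exception type are hypothetical.
    
    ```java
    import java.util.List;
    
    public class Main {
      // Hypothetical sketch: when argument parsing fails because no arguments
      // were given, catch the exception and print usage instead of crashing.
      static String handleArgs(List<String> args) {
        try {
          if (args.isEmpty()) {
            // Stand-in for the parse failure spark-submit hits with no args.
            throw new IllegalArgumentException("Missing application resource.");
          }
          return "parsed";
        } catch (IllegalArgumentException e) {
          // No additional arguments: fall back to printing the usage text.
          return usage();
        }
      }
    
      static String usage() {
        return "Usage: spark-submit [options] <app jar | python file> [app arguments]";
      }
    
      public static void main(String[] args) {
        System.out.println(handleArgs(java.util.Arrays.asList(args)));
      }
    }
    ```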
    
    ## How was this patch tested?
    
    Manually tested.
    ./bin/spark-submit 
    Usage: spark-submit [options] <app jar | python file> [app arguments]
    Usage: spark-submit --kill [submission ID] --master [spark://...]
    Usage: spark-submit --status [submission ID] --master [spark://...]
    Usage: spark-submit run-example [options] example-class [example args]
    
    Options:
      --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
      --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                                  on one of the worker machines inside the cluster ("cluster")
                                  (Default: client).
      --class CLASS_NAME          Your application's main class (for Java / Scala apps).
      --name NAME                 A name of your application.
      --jars JARS                 Comma-separated list of local jars to include on the driver
                                  and executor classpaths.
      --packages                  Comma-separated list of maven coordinates of jars to include
                                  on the driver and executor classpaths. Will search the local
                                  maven repo, then maven central and any additional remote
                                  repositories given by --repositories. The format for the
                                  coordinates should be groupId:artifactId:version.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wangmiao1981/spark submit

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13163.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #13163
    
----
commit 548f3b7fec55089709bb7fbed8819d44a3107af7
Author: [email protected] <[email protected]>
Date:   2016-05-17T23:03:33Z

    add print usage function

commit ce0d2c07c0128410dea5276ec7fb47fb9c9ee1e6
Author: [email protected] <[email protected]>
Date:   2016-05-18T07:01:12Z

    remove a blank line

----


