With --packages, spark-shell and spark-submit will automatically fetch the
dependency (and its transitive dependencies) from Maven. If you want to use
an explicit local jar instead, you can do that with --jars. You might find
http://spark.apache.org/docs/latest/submitting-applications.html useful.
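
For example (the --packages line is the one from your mail; the jar path in
the --jars variant is just illustrative):

  # resolve the package from Maven automatically
  $SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.3.0

  # or supply a jar you built locally (path is hypothetical)
  $SPARK_HOME/bin/spark-shell --jars /path/to/spark-csv_2.11-1.3.0.jar

Once the package is on the classpath you can use it from the shell, e.g.
(cars.csv is a placeholder file):

  scala> val df = sqlContext.read
           .format("com.databricks.spark.csv")
           .option("header", "true")
           .load("cars.csv")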

On Fri, Feb 19, 2016 at 7:26 AM, Ashok Kumar <ashok34...@yahoo.com.invalid>
wrote:

> Hi,
>
> I downloaded the zipped csv libraries from databricks/spark-csv
> <https://github.com/databricks/spark-csv>
>
> Now I have a directory created called spark-csv-master. I would like to
> use this in spark-shell with --packages like below
>
> $SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.3.0
>
> Do I need to use mvn to build a jar first, or should it be added to the
> Spark CLASSPATH? What is needed here to make it work?
>
> thanks
>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau