I am trying to use the Databricks CSV reader (spark-csv) and have tried
multiple ways to make the package available to PySpark. I have modified both
spark-defaults.conf and zeppelin-env.sh (as shown below), and I've included
the spark-interpreter log from Zeppelin, which appears to show the jar being
added correctly. The odd part: running pyspark at the command line works
fine. I am running Zeppelin (and thus Spark) in Docker, but to rule that
out, I connected to the Docker container that was throwing the error in
Zeppelin and ran pyspark from within it; that also worked fine. The error
only occurs in Zeppelin.

I would welcome any assistance.

John



*Error in Zeppelin:*
Py4JJavaError: An error occurred while calling o82.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.csv.DefaultSource
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
    at scala.util.Try.orElse(Try.scala:82)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:62)
    ... 14 more
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling o82.load.\n', JavaObject id=o83), <traceback object at 0x7f3776b36320>)


*zeppelin-env.sh*

export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"


*spark-defaults.conf*

spark.jars.packages             com.databricks:spark-csv_2.10:1.2.0
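For completeness: I also understand Zeppelin can pull in the package through
its %dep interpreter, in a paragraph run before the Spark interpreter starts
(it has no effect once the SparkContext is up, so the interpreter must be
restarted first). A sketch of what I mean, using the same coordinates as the
configs above:

```
%dep
z.reset()
z.load("com.databricks:spark-csv_2.10:1.2.0")
```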

*Command I am running:*

df = sqlContext.read.format('com.databricks.spark.csv') \
    .option('header', 'true') \
    .option('inferSchema', 'true') \
    .option('mode', 'DROPMALFORMED') \
    .load('/user/test/airline/2016_ONTIME.csv')
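To compare what the driver JVM can actually see in each environment, a quick
diagnostic that can be run both in the pyspark shell and in a Zeppelin
%pyspark paragraph (a sketch; it assumes a live SparkContext `sc` and uses
py4j's gateway into the driver JVM):

```
# Ask the driver JVM's classloader to resolve the data source class.
# If the spark-csv jar is on the driver classpath this returns the Class
# object; otherwise it raises a Py4JJavaError wrapping
# java.lang.ClassNotFoundException, matching the error above.
sc._jvm.java.lang.Class.forName("com.databricks.spark.csv.DefaultSource")
```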



*spark interpreter log:*

INFO [2016-04-17 11:45:59,335] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Added JAR file:/home/test/.ivy2/jars/com.databricks_spark-csv_2.10-1.2.0.jar at http://192.168.0.95:59483/jars/com.databricks_spark-csv_2.10-1.2.0.jar with timestamp 1460893559334

INFO [2016-04-17 11:45:59,335] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Added JAR file:/home/test/.ivy2/jars/org.apache.commons_commons-csv-1.1.jar at http://192.168.0.95:59483/jars/org.apache.commons_commons-csv-1.1.jar with timestamp 1460893559335

INFO [2016-04-17 11:45:59,336] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Added JAR file:/home/test/.ivy2/jars/com.univocity_univocity-parsers-1.5.1.jar at http://192.168.0.95:59483/jars/com.univocity_univocity-parsers-1.5.1.jar with timestamp 1460893559336

INFO [2016-04-17 11:45:59,348] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Added JAR file:/zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar at http://192.168.0.95:59483/jars/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar with timestamp 1460893559348

INFO [2016-04-17 11:45:59,470] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1

INFO [2016-04-17 11:45:59,551] ({Thread-38} Logging.scala[logInfo]:58) - Registered as framework ID e996d06e-4a8b-4647-9d07-02a7517c1453-0025

INFO [2016-04-17 11:45:59,556] ({pool-2-thread-2} Logging.scala[logInfo]:58) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37373.
