Yes Alex,
sc.version tells me String = 1.3.1.
So it is Spark 1.3, and when I use the 1.3 syntax from the spark-csv
website I get a class-not-found error:

NoClassDefFoundError: org/apache/commons/csv/CSVFormat

I guess the spark-csv packages are not loading properly.
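For reference, the 1.3-style call I am using looks roughly like this (a
sketch based on my reading of the spark-csv README for Spark 1.3; the file
path is just a placeholder):

import org.apache.spark.sql.SQLContext

// On Spark 1.3.x, spark-csv goes through SQLContext.load with an options
// map, not the sqlContext.read (DataFrameReader) API added in 1.4.
val sqlContext = new SQLContext(sc)
val df = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "filename.csv", "header" -> "true"))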

I tried both
%dep z.load("com.databricks:spark-csv_2.10:1.0.3")
and
%dep z.load("/Users/george/Downloads/spark-csv_2.11-1.0.3.jar")
and neither worked. I also put the jar path in the
ZEPPELIN_JAVA_OPTS environment variable in zeppelin-env.sh.
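For completeness, the exact shape of what I tried is below, in case I am
holding it wrong (the z.reset() call and the -Dspark.jars flag are my
reading of the Zeppelin docs, not something I have confirmed works):

%dep
// must run before the Spark interpreter starts for this notebook;
// z.reset() clears artifacts loaded earlier in the session
z.reset()
z.load("com.databricks:spark-csv_2.10:1.0.3")

and in conf/zeppelin-env.sh:

export ZEPPELIN_JAVA_OPTS="-Dspark.jars=/Users/george/Downloads/spark-csv_2.11-1.0.3.jar"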


On Tue, Jun 23, 2015 at 8:32 PM, Alexander Bezzubov <abezzu...@nflabs.com>
wrote:

> Hi George,
>
> does the Spark version that you use on the cluster match the one Zeppelin
> is built with
> <http://zeppelin.incubator.apache.org/docs/install/install.html>?
> The API that you use was introduced only in Spark 1.4, and it is not the
> default version yet <https://github.com/apache/incubator-zeppelin/pull/99>.
>
> You can check by running a simple Scala paragraph with `sc.version`.
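>
> Something like this in a notebook paragraph (the printed value below is
> only an example of what a 1.3 build would show):
>
> sc.version
> // res0: String = 1.3.1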
>
> --
> Alex
>
> On Wed, Jun 24, 2015 at 11:51 AM, George Koshy <gkos...@gmail.com> wrote:
>
>> Please help,
>> I get this error:
>>
>> error: value read is not a member of org.apache.spark.sql.SQLContext
>>
>> val df = sqlContext.read.format("com.databricks.spark.csv")
>>   .option("header", "true")
>>   .load("filename.csv")
>>
>>
>> My code is as follows:
>> import org.apache.spark.SparkContext
>>
>> %dep
>> com.databricks:spark-csv_2.11:1.0.3
>>
>> import org.apache.spark.sql.SQLContext
>> val sqlContext = new SQLContext(sc)
>> val df = sqlContext.read.format("com.databricks.spark.csv")
>>   .option("header", "true")
>>   .load("fileName.csv")
>>
>> --
>> Sincerely!
>> George Koshy,
>> Richardson,
>> in.linkedin.com/in/gkoshyk/
>>
>
>
>
> --
> Kind regards,
> Alexander.
>
>


-- 
Sincerely!
George Koshy,
Richardson,
in.linkedin.com/in/gkoshyk/
