Hi All,

Has anyone seen something similar to this?

%spark
import org.apache.spark.sql._
import org.apache.phoenix.spark._

val input = "/user/root/crimes/atlanta"

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("DROPMALFORMED", "true")
  .load(input)
val columns = df.columns.map(x => x.toUpperCase + " varchar,\n")
columns
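(For clarity, here is just the column-mapping step in isolation, with a made-up array standing in for df.columns, since the real header names come from the CSV file; the names below are hypothetical.)

```scala
// Sketch of the mapping above, independent of Spark:
// uppercase each column name and append a Phoenix-style varchar suffix.
val columns = Array("id", "crime_type", "date")
val mapped = columns.map(x => x.toUpperCase + " varchar,\n")
println(mapped.mkString)
// prints:
// ID varchar,
// CRIME_TYPE varchar,
// DATE varchar,
```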

The result is an error:
File name too long

I tried commenting out various lines, and then all lines, but everything
passed to the interpreter (even in new paragraphs) results in "File name
too long".

Am I doing something silly?

Thanks,
-Randy
