Logger.getLogger("akka").setLevel(Level.OFF)
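In context, the line above is about silencing Spark's chatty logging. A minimal self-contained sketch, assuming the log4j 1.x API that Spark 1.x ships with (the logger names "org" and "akka" are the conventional ones to quiet; adjust for your version):

```scala
import org.apache.log4j.{Level, Logger}

// Call this before creating the SparkContext so the noisy INFO
// output from Spark ("org") and Akka ("akka") is suppressed.
def quietLogs(): Unit = {
  Logger.getLogger("org").setLevel(Level.OFF)
  Logger.getLogger("akka").setLevel(Level.OFF)
}
```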
Thanks
--
Thanks
Muhammad Ahsan
va:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
Thanks in advance.
--
Thanks
Muhammad Ahsan
Hi
saveAsTextFile is a member of RDD, whereas
fields.map(_.mkString("|")).mkString("\n") is a String. You have to
transform it into an RDD using something like sc.parallelize(...) before
calling saveAsTextFile.
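A minimal sketch of the above, assuming `fields` is a local collection of rows and a hypothetical output path; the pure-Scala joining logic is split into `toLines` so only the last two lines actually need Spark:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Pure Scala: build one "|"-delimited line per row.
def toLines(fields: Seq[Seq[String]]): Seq[String] =
  fields.map(_.mkString("|"))

object SaveExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("save-example").setMaster("local[*]"))
    val fields = Seq(Seq("1", "alice"), Seq("2", "bob"))
    // sc.parallelize turns the local collection into an RDD,
    // which is what provides the saveAsTextFile member.
    sc.parallelize(toLines(fields)).saveAsTextFile("/tmp/output") // hypothetical path
    sc.stop()
  }
}
```

Note that saveAsTextFile writes a directory of part files, not a single file.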
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/E
Hi
It worked for me like this. Just define the case class outside of any class
to write to parquet format successfully. I am using Spark version 1.1.1.
case class person(id: Int, name: String, fathername: String, officeid: Int)
object Program {
def main (args: Array[String]) {
val co
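The snippet above is cut off; a fuller sketch of the same approach on the Spark 1.1 API, with the case class at top level as described (names and the output path here are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Defined at top level, NOT nested inside Program: nesting the case class
// inside another class is what breaks schema inference for Parquet output.
case class Person(id: Int, name: String, fathername: String, officeid: Int)

object Program {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("parquet-example").setMaster("local[*]"))
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    // Spark 1.1 implicit: converts RDD[Person] to a SchemaRDD,
    // which has the saveAsParquetFile member.
    import sqlContext.createSchemaRDD

    val people = sc.parallelize(Seq(Person(1, "ali", "ahmed", 10)))
    people.saveAsParquetFile("/tmp/people.parquet") // hypothetical output path
    sc.stop()
  }
}
```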
--
Code
--
scala> import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext._
scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD
scala> import org.apache.spark.sql.SchemaR