Import sqlContext.implicits._ before calling toDF() on the RDD.
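For example, in Spark 1.x the implicit conversions must be in scope before toDF() will compile; a minimal sketch (app name, data, and column names are illustrative, not from the original mail):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object DFCreationExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("df-example").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Bring the implicit RDD-to-DataFrame conversions into scope.
    // Without this import, rdd.toDF(...) does not compile.
    import sqlContext.implicits._

    val df = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")
    df.show()

    sc.stop()
  }
}
```

The import is on the sqlContext *instance*, so it has to come after the SQLContext is created, not at the top of the file.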
-------- Original message --------
From: satyajit vegesna
Date: 19/03/2016 06:00 (GMT+05:30)
To: u...@spark.apache.org, dev@spark.apache.org
Cc:
Subject: Fwd: DF creation
Hi,
I am trying to create separate val
Are you using YARN only to run Spark jobs? Are you configuring Spark
properties via spark-submit parameters? If so,
did you try with --num-executors x*53 (where x is the number of nodes)
--executor-memory 1g --driver-memory 1g?
You might see YARN allocating
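Assembled into a full command, the suggested settings would look roughly like this (the class name, jar, and executor count are placeholders; the flag names --num-executors, --executor-memory, and --driver-memory are the actual spark-submit options):

```shell
# Hypothetical application jar and main class, for illustration only.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-memory 1g \
  --driver-memory 1g \
  --class com.example.MyApp \
  myapp.jar
```

With --master yarn, these flags tell YARN how many executor containers to request and how much memory to grant each one.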