Hello.
I have a Python process that reads from a Kafka queue and, for each record, checks whether it exists in a table.
# Load table in memory
table = sqlContext.sql("select id from table")
table.cache()

def processForeach(time, rdd):
    print(time)
    for k in rdd.collect():
        if (t ...

kafkaTopic.foreachRDD(processForeach)
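(A self-contained sketch of the lookup pattern described here, assuming kafkaTopic is a DStream that already yields plain string values and that the truncated condition checks each record against the cached table; id_set and the print call are placeholders:)

# Collect the lookup ids once on the driver into a plain Python set;
# the loop below also runs on the driver because of rdd.collect().
id_set = set(row.id for row in sqlContext.sql("select id from table").collect())

def processForeach(time, rdd):
    print(time)
    for k in rdd.collect():
        if k in id_set:   # placeholder for the truncated check
            print(k)      # placeholder action on a matching record

kafkaTopic.foreachRDD(processForeach)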
Gourav
On Wed, Aug 9, 2017 at 10:41 AM, toletum wrote:
Thanks, Matteo. I fixed it.
Regards,
JCS
On Wed., Aug. 9, 2017 at 11:22, Matteo Cossu wrote:
Hello,
try to use these options when starting Spark:
--conf "spark.driver.userClassPathFirst=true" --conf "spark.executor.userClassPathFirst=true"
With these options, the executor and the driver of Spark will use the classpath you define.
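(For completeness, a full spark-submit invocation with both options might look like the lines below; the application script name is a placeholder:)

spark-submit \
  --conf "spark.driver.userClassPathFirst=true" \
  --conf "spark.executor.userClassPathFirst=true" \
  your_app.py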
Best Regards,
Matteo Cossu
On 5 August 2017 at 23:04, toletum wrote:
Hi everybody
I'm trying to connect Spark to Hive.
Hive uses Derby Server for metastore_db.
In $SPARK_HOME/conf/hive-site.xml:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://derby:1527/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
</property>
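(A minimal PySpark sketch of the setup described above, assuming hive-site.xml is picked up from $SPARK_HOME/conf and the Derby network server at derby:1527 is reachable; the application name is a placeholder:)

from pyspark.sql import SparkSession

# Build a Hive-enabled session; enableHiveSupport() makes Spark use the
# metastore configured in hive-site.xml.
spark = (SparkSession.builder
         .appName("hive-metastore-check")
         .enableHiveSupport()
         .getOrCreate())

# Quick check that the Derby-backed metastore is reachable.
spark.sql("show databases").show()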