Everything in lib_managed is what gets downloaded by sbt/Maven when you
compile. Those libraries are necessary for running Spark, Spark Streaming,
etc., but you should not have to add all of them to the classpath
individually and manually when running Spark programs. If you are trying
to run your Spark program
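To make the point concrete, here is a hedged shell sketch. The `jars_to_classpath` helper and the demo directory are hypothetical, not part of Spark's tooling; in practice the Spark assembly jar (built with `sbt/sbt assembly` in the 0.9.x source tree) already bundles the lib_managed dependencies, so you never assemble this classpath by hand.

```shell
# Hypothetical helper: join every jar under a directory into one
# colon-separated classpath string. You rarely need this with Spark,
# since the assembly jar already bundles the lib_managed dependencies.
jars_to_classpath() {
  find "$1" -name '*.jar' | sort | tr '\n' ':' | sed 's/:$//'
}

# Demo with a throwaway directory standing in for lib_managed:
rm -rf /tmp/demo_lib
mkdir -p /tmp/demo_lib/sub
touch /tmp/demo_lib/a.jar /tmp/demo_lib/sub/b.jar
jars_to_classpath /tmp/demo_lib
# → /tmp/demo_lib/a.jar:/tmp/demo_lib/sub/b.jar
```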
Hi Tathagata,
I actually have a separate question. What's the purpose of the lib_managed
folder inside the Spark source folder? Are those the libraries required for
Spark Streaming to run? Do they need to be added to the Spark classpath
when starting the Spark cluster?
Weide
On Sat, May 3, 2014 at 7:08 PM,
Hi Tathagata,
I figured out the reason. I was adding the wrong Kafka lib alongside the
version Spark uses. Sorry for spamming.
Weide
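The root cause here (two versions of the same Kafka library on one classpath) can be spotted mechanically. Below is a minimal sketch; `list_conflicts` is a hypothetical helper, not part of Spark or sbt, and it assumes jar file names end in a dotted version suffix.

```shell
# Hypothetical helper: report artifact base names that appear more than
# once (i.e. with multiple versions) among the given jar file names.
list_conflicts() {
  for j in "$@"; do
    basename "$j" .jar
  done | sed 's/[-_][0-9][0-9.]*$//' | sort | uniq -d
}

# Two kafka jars with different versions on the classpath: a conflict.
list_conflicts kafka_2.10-0.8.0.jar kafka_2.10-0.8.1.jar slf4j-api-1.7.2.jar
# → kafka_2.10
```

Running something like this over the jars you hand to Spark would have flagged the duplicate Kafka artifact before it caused exceptions at runtime.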
On Sat, May 3, 2014 at 7:04 PM, Tathagata Das
wrote:
I am a little confused about the version of Spark you are using. Are you
using Spark 0.9.1 that uses scala 2.10.3 ?
TD
On Sat, May 3, 2014 at 6:16 PM, Weide Zhang wrote:
Hi, I'm trying to run the kafka-word-count example in Spark 2.9.1. I
encountered some exceptions when initializing the Kafka consumer/producer
config. I'm using Scala 2.10.3 and used Maven to build against the Spark
Streaming Kafka library that comes with Spark 2.9.1. Has anyone seen this
exception before?
Thanks,