Have you tried kafkaStream.foreachRDD(rdd => { rdd.foreach(...) })?
Would that make a difference?
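
Concretely, the full shape of what I am suggesting is (a minimal sketch,
using the process function from your code below):

    kafkaStream.foreachRDD(rdd => {
      // Run process on the executors instead of shipping every
      // event back to the driver with collect()
      rdd.foreach(event => {
        process(event._1, event._2)
      })
    })

One thing worth checking: an ExceptionInInitializerError on the workers
usually means a static/object initializer that process depends on is
failing on the executor JVMs. It would succeed on the driver, which would
explain why the collect() version works. If that is the case, a common
workaround is to build the offending resource inside the partition
closure; in the sketch below, createClient and the extra client parameter
on process are hypothetical, just to illustrate the pattern:

    kafkaStream.foreachRDD(rdd => {
      rdd.foreachPartition(iter => {
        // Hypothetical: create the non-serializable resource on the
        // executor, once per partition, not at class-load time
        val client = createClient()
        iter.foreach(event => {
          process(client, event._1, event._2)
        })
      })
    })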


On Thu, Dec 11, 2014 at 10:24 AM, david <david...@free.fr> wrote:

> Hi,
>
>   We use the following Spark Streaming code to collect and process Kafka
> events:
>
>     kafkaStream.foreachRDD(rdd => {
>       rdd.collect().foreach(event => {
>           process(event._1, event._2)
>       })
>     })
>
> This works fine.
>
> But without the collect() call, the following exception is raised when
> process is invoked:
>     *Loss was due to java.lang.ExceptionInInitializerError*
>
>
>   We attempted to rewrite it like this, but the same exception is raised:
>
>     kafkaStream.foreachRDD(rdd => {
>       rdd.foreachPartition(iter =>
>         iter.foreach(event => {
>           process(event._1, event._2)
>         })
>       )
>     })
>
>
> Can anybody explain to us why this happens, and how to solve it?
>
> Thanks
>
> Regards
>