Thanks Cody, I was able to figure out the issue yesterday after sending the
last email.
On Friday, September 25, 2015, Cody Koeninger wrote:
So you're still having a problem getting partitions or offsets from kafka
when creating the stream. You can try each of those kafka operations
individually (getPartitions / getLatestLeaderOffsets)
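To try those calls individually, something like the following sketch can help isolate which one fails. The broker address and topic name are placeholders, and note that KafkaCluster is private[spark] in some releases, so the snippet may need to be compiled inside an org.apache.spark.streaming.kafka package:

```scala
import org.apache.spark.streaming.kafka.KafkaCluster

// "broker-host:9092" and "my-topic" are placeholders for your cluster.
val kafkaParams = Map("metadata.broker.list" -> "broker-host:9092")
val kc = new KafkaCluster(kafkaParams)

// Either[Err, Set[TopicAndPartition]] -- a Left here means metadata
// lookup itself is failing, before offsets are ever requested.
val partitions = kc.getPartitions(Set("my-topic"))
println(partitions)

// Only ask for offsets if the partition lookup succeeded.
partitions.right.foreach { tps =>
  println(kc.getLatestLeaderOffsets(tps))
}
```

If getPartitions already returns a Left, the problem is in reaching the broker's metadata endpoint rather than in the offset lookup.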
checkErrors should be dealing with an ArrayBuffer of Throwables, not just a
single one. Is that the
Here is the code snippet, starting at line 365 in KafkaCluster.scala:

  type Err = ArrayBuffer[Throwable]

  /** If the result is right, return it, otherwise throw SparkException */
  def checkErrors[T](result: Either[Err, T]): T = {
    result.fold(
      errs => throw new SparkException(errs.mkString("\n")),
      ok => ok
    )
  }
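The fold above can be exercised in isolation. Here is a minimal self-contained sketch of the same pattern, with a plain RuntimeException standing in for SparkException (which is assumed not to be on the classpath here):

```scala
import scala.collection.mutable.ArrayBuffer

object CheckErrorsDemo {
  type Err = ArrayBuffer[Throwable]

  // Mirrors the checkErrors pattern: unwrap a Right, or throw with all
  // accumulated errors joined together, not just the first one.
  def checkErrors[T](result: Either[Err, T]): T =
    result.fold(
      errs => throw new RuntimeException(errs.mkString("\n")),
      ok => ok
    )

  def main(args: Array[String]): Unit = {
    // A Right value passes through untouched.
    println(checkErrors(Right(42)))

    // A Left surfaces every accumulated throwable in the message.
    val errs: Err = ArrayBuffer(new Exception("leader not found"),
                                new Exception("timeout"))
    try checkErrors(Left(errs))
    catch { case e: RuntimeException => println(e.getMessage) }
  }
}
```

Because the errors are accumulated in a buffer, a single SparkException message can report several underlying Kafka failures at once, which is why the stack trace you see wraps another exception's text.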
I was able to get past this issue. I was pointing at the SSL port, whereas
SimpleConsumer should point to the PLAINTEXT port. But after fixing that I
am getting the following error:
Exception in thread "main" org.apache.spark.SparkException:
java.nio.BufferUnderflowException
at
org.apache.spar
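For reference, the PLAINTEXT/SSL distinction comes from the broker's listener configuration; a typical server.properties entry on a broker that exposes both looks something like the following (the port numbers are just the common defaults, not necessarily what your cluster uses):

```properties
# One listener per security protocol; SimpleConsumer can only speak
# to the PLAINTEXT listener.
listeners=PLAINTEXT://:9092,SSL://:9093
```

A BufferUnderflowException when parsing a response is also a classic symptom of talking the wrong protocol to a port, so it is worth double-checking every broker address in the params points at the PLAINTEXT listener.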
That looks like the OOM is in the driver, when getting partition metadata
to create the direct stream. In that case, executor memory allocation
doesn't matter.
Allocate more driver memory, or put a profiler on it to see what's taking
up heap.
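For the first option, driver memory can be raised at submit time with the standard spark-submit flag; for example (the 4g value, class name, and jar name are placeholders):

```shell
spark-submit \
  --class com.example.StreamingJob \
  --driver-memory 4g \
  streaming-job.jar
```

Equivalently, spark.driver.memory can be set in the job's configuration, but for spark-submit the flag is the reliable route since the driver JVM is already started by the time application code runs.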
On Thu, Sep 24, 2015 at 3:51 PM, Sourabh Chandak
wrote:
Adding Cody and Sriharsha
On Thu, Sep 24, 2015 at 1:25 PM, Sourabh Chandak
wrote:
> Hi,
>
> I have ported receiver-less Spark Streaming for Kafka to Spark 1.2 and am
> trying to run a Spark Streaming job to consume data from my broker, but I
> am getting the following error:
>
> 15/09/24 20:17:4