Hi Pankaj,
>> After the second consumer group comes up
Do you mean a second consumer starts with the same consumer group as the
first?
createDirectStream is overloaded. One of the overloads doesn't require you
to specify the partitions of a topic.
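For reference, here is a minimal sketch of that subscribe-style overload from the Kafka 0.10 integration, where ConsumerStrategies.Subscribe takes only topic names and lets Kafka assign partitions across members of the group; the broker address, topic name, and group id below are placeholder assumptions, not values from this thread:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("direct-stream-sketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Placeholder connection settings -- adjust for your cluster.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "my-consumer-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Subscribe takes only topic names; no per-partition list is needed.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("my-topic"), kafkaParams)
    )

    stream.map(record => record.value).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

By contrast, the overload discussed earlier in the thread uses ConsumerStrategies.Assign with an explicit collection of TopicPartitions, which pins each stream to fixed partitions instead of relying on Kafka's group assignment.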
Cheers
- Sree
On Thursday, June 8, 2017 9:56 AM
Hi,
Thank you for your reply!
You got it right! I am trying to run multiple streams using the same
consumer group, so that I can distribute different partitions among
different instances of the consumer group. I don't want to provide the
list of partitions in the createDirectStream API. If I do that then I
> at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
> at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
> at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1(EventLoop.scala:48)
I see that there is a Spark ticket open for the same issue
(https://issues.apache.org/jira/browse/SPARK-19547), but it has been marked
as INVALID. Can someone explain why this ticket is marked INVALID?
Thanks,
Pankaj