The behavior is exactly what I expected. Thanks, Akhil and Tathagata!
bit1...@163.com
From: Akhil Das
Date: 2015-02-24 13:32
To: bit1129
CC: Tathagata Das; user
Subject: Re: Re: About FlumeUtils.createStream
That depends on how many machines you have in your cluster. Say you have 6
workers; it is most likely that the receivers will be distributed across all
workers (assuming your topic has 6 partitions). Now when you have more than 6
partitions, say 12, then these 6 receivers will start to consume from 2
partitions each.
They will be distributed among the cluster nodes.
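As a sketch of the pattern being discussed, the following puts the snippet from the question into a minimal program: 6 receiver-based Kafka streams are created and unioned into a single DStream. The connection values (zkQuorum, group, topicMap) are hypothetical placeholders, and the code assumes the Spark 1.x receiver-based Kafka integration (spark-streaming-kafka):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object MultiReceiverSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MultiReceiverSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Hypothetical connection values, for illustration only.
    val zkQuorum = "zk1:2181,zk2:2181"
    val group = "my-consumer-group"
    val topicMap = Map("mytopic" -> 1) // topic -> consumer threads per receiver

    // Each createStream call starts one receiver; Spark schedules the
    // receivers across the worker nodes of the cluster.
    val streams = (1 to 6).map { _ =>
      KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    }

    // Union the per-receiver DStreams into a single stream before processing.
    val unified = ssc.union(streams)
    unified.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

With 6 receivers and a 12-partition topic, each receiver ends up reading from roughly 2 partitions, which matches the explanation above.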
On Mon, Feb 23, 2015 at 8:45 PM, bit1...@163.com wrote:
Hi Akhil, Tathagata,
This leads me to another question. For the Spark Streaming and Kafka
integration, if there is more than one Receiver in the cluster, such as
val streams = (1 to 6).map ( _ => KafkaUtils.createStream(ssc, zkQuorum,
group, topicMap).map(_._2) ),
then how will these Receivers be distributed?
Thanks both of you guys on this!
bit1...@163.com
From: Akhil Das
Date: 2015-02-24 12:58
To: Tathagata Das
CC: user; bit1129
Subject: Re: About FlumeUtils.createStream
I see, thanks for the clarification TD.
On 24 Feb 2015 09:56, "Tathagata Das" wrote:
Akhil, that is incorrect.
Spark will li