Hello Lin Hao,

Thanks for your reply. I will try to produce more data into Kafka.

I run three Kafka brokers. Here is my topic info:

Topic:kyle_test_topic PartitionCount:3 ReplicationFactor:2 Configs:
Topic: kyle_test_topic Partition: 0 Leader: 3 Replicas: 3,4 Isr: 3,4
Topic: kyle_test_topic Partition: 1 Leader: 4 Replicas: 4,5 Isr: 4,5
Topic: kyle_test_topic Partition: 2 Leader: 4 Replicas: 5,4 Isr: 4,5
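
To produce more data, I will probably run something like the sketch below. It is only a rough sketch: the broker list and the message contents are placeholders, and it uses the Kafka 0.8 Scala producer API (kafka.producer.Producer) that ships with 0.8.1.

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    object TestDataProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        // Placeholder broker list; replace with the real broker addresses.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092")
        props.put("serializer.class", "kafka.serializer.StringEncoder")

        val producer = new Producer[String, String](new ProducerConfig(props))

        // Send enough dummy messages that the topic holds well over a few MB.
        for (i <- 1 to 1000000) {
          producer.send(new KeyedMessage[String, String]("kyle_test_topic", s"test-message-$i"))
        }
        producer.close()
      }
    }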

Kyle

2015-04-30 14:43 GMT+08:00 Lin Hao Xu <xulin...@cn.ibm.com>:

> It seems that the data size is only 2.9 MB, far less than the default RDD
> size. How about putting more data into Kafka? And what about the number of
> topic partitions in Kafka?
>
> Best regards,
>
> Lin Hao XU
> IBM Research China
> Email: xulin...@cn.ibm.com
> My Flickr: http://www.flickr.com/photos/xulinhao/sets
>
>
> From: Kyle Lin <kylelin2...@gmail.com>
> To: "user@spark.apache.org" <user@spark.apache.org>
> Date: 2015/04/30 14:39
> Subject: The processing load of Spark Streaming on YARN is not balanced
> ------------------------------
>
>
>
> Hi all
>
> My environment info:
> Hadoop release version: HDP 2.1
> Kafka: 0.8.1.2.1.4.0
> Spark: 1.1.0
>
> My question:
>     I ran a Spark Streaming program on YARN. The program reads data from
> Kafka and does some processing. However, I found that only ONE executor is
> ever doing the processing. As shown in the table below, I have 10 worker
> executors, but only executor No. 5 is running now, and all RDD blocks are
> on executor No. 5. The processing does not look distributed at all. How can
> I make all executors run together?
>
> Executor ID  Address        RDD Blocks  Memory Used        Disk Used  Active Tasks  Failed Tasks  Complete Tasks  Total Tasks  Task Time  Input    Shuffle Read  Shuffle Write
> 1            slave03:38662  0           0.0 B / 265.4 MB   0.0 B      0             0             58              58           24.9 s     0.0 B    1197.0 B      440.5 KB
> 10           slave05:36992  0           0.0 B / 265.4 MB   0.0 B      0             0             44              44           30.9 s     0.0 B    0.0 B         501.7 KB
> 2            slave02:40250  0           0.0 B / 265.4 MB   0.0 B      0             0             30              30           19.6 s     0.0 B    855.0 B       1026.0 B
> 3            slave01:40882  0           0.0 B / 265.4 MB   0.0 B      0             0             28              28           20.7 s     0.0 B    1197.0 B      1026.0 B
> 4            slave04:57068  0           0.0 B / 265.4 MB   0.0 B      0             0             29              29           20.9 s     0.0 B    1368.0 B      1026.0 B
> 5            slave05:40191  23          2.9 MB / 265.4 MB  0.0 B      1             0             3928            3929         5.1 m      2.7 MB   261.7 KB      564.4 KB
> 6            slave03:35515  0           0.0 B / 265.4 MB   0.0 B      1             0             47              48           23.1 s     0.0 B    513.0 B       400.4 KB
> 7            slave02:40325  0           0.0 B / 265.4 MB   0.0 B      0             0             30              30           20.2 s     0.0 B    855.0 B       1197.0 B
> 8            slave01:48609  0           0.0 B / 265.4 MB   0.0 B      0             0             28              28           20.5 s     0.0 B    1363.0 B      1026.0 B
> 9            slave04:33798  0           0.0 B / 265.4 MB   0.0 B      0             0             4712            4712         8.5 m      11.0 MB  1468.0 KB     1026.0 B
> <driver>     slave01:39172  0           0.0 B / 265.1 MB   0.0 B      0             0             0               0            0 ms       0.0 B    0.0 B         0.0 B
>
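> For context, the streaming side of the job looks roughly like the sketch
> below. This is only an approximation, not the exact code: the ZooKeeper
> address, consumer group, batch interval, and processing step are
> placeholders. It uses the receiver-based KafkaUtils.createStream API from
> Spark 1.1.
>
>     import org.apache.spark.SparkConf
>     import org.apache.spark.streaming.{Seconds, StreamingContext}
>     import org.apache.spark.streaming.kafka.KafkaUtils
>
>     object KafkaStreamingJob {
>       def main(args: Array[String]): Unit = {
>         val conf = new SparkConf().setAppName("kafka-streaming-test")
>         val ssc = new StreamingContext(conf, Seconds(5)) // batch interval is a guess
>
>         // Receiver-based Kafka stream: a single receiver runs on one executor,
>         // so the received blocks are first stored on that executor.
>         val stream = KafkaUtils.createStream(
>           ssc,
>           "zk1:2181",             // ZooKeeper quorum (placeholder)
>           "kyle-consumer-group",  // consumer group id (placeholder)
>           Map("kyle_test_topic" -> 1))
>
>         // Placeholder processing step.
>         stream.map(_._2).count().print()
>
>         ssc.start()
>         ssc.awaitTermination()
>       }
>     }
>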
> Kyle
>
>
