Clearly your input size isn't changing, so what you are measuring is strong
scaling, where fixed per-job costs eventually cap the speedup. And depending on
how the blocks are distributed across the nodes, there can also be
DataNode/disk contention.
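
As a back-of-the-envelope model (standard Amdahl-style strong scaling, not
anything Hadoop-specific): if a fraction p of the job is serial or fixed cost
(job setup, shuffle, the final merge), then the speedup on n nodes is roughly
S(n) = 1 / (p + (1 - p)/n), which flattens quickly as n grows; disk and network
contention can push it down further.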

The better way to model this is to scale the input data linearly as well: more
nodes should process proportionally more data in the same amount of time (weak
scaling). A sketch along those lines is below.
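
For example, a rough sketch with the stock examples jar (the paths, sizes, and
exact property names below are from memory and only meant as an illustration):
randomwriter writes a fixed amount of data on every live node, so the generated
input grows with the cluster, and you then time the sort over it.

  # generate roughly 8 maps x 1 GB = ~8 GB of input per live node
  hadoop jar hadoop-examples-1.2.1.jar randomwriter \
      -Dtest.randomwriter.maps_per_host=8 \
      -Dtest.randomwrite.bytes_per_map=1073741824 \
      /bench/rand-in
  # time this job for each cluster size
  hadoop jar hadoop-examples-1.2.1.jar sort /bench/rand-in /bench/rand-out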

Thanks,
+Vinod

On Sep 6, 2013, at 8:27 AM, 牛兆捷 wrote:

> Hi all:
> 
> I vary the number of compute nodes in the cluster and get the speedup result
> in the attachment.
> 
> In my mind, there are three types of speedup model: linear, sub-linear, and
> super-linear. However, the curve of my result seems a little strange. I have
> attached it.
> <speedup.png>
> 
> This is the sort from example.jar; it actually just uses the default
> map-reduce mechanism of Hadoop.
> 
> I use hadoop-1.2.1, with 8 map slots and 8 reduce slots per node (12 CPUs,
> 20 GB memory).
> io.sort.mb = 512, block size = 512 MB, heap size = 1024 MB,
> reduce.slowstart = 0.05; the others are defaults.
> 
> Input data: 20 GB, divided into 64 files
> 
> Sort example: 64 map tasks, 64 reduce tasks
> 
> Compute nodes: varied from 2 to 9
> 
> Why does the speedup curve look like this? How can I model it properly?
> 
> Thanks~
> 
> -- 
> Sincerely,
> Zhaojie
> 

