I see. To beat 3x replication in terms of I/O, we need to generate
parity blocks carefully. We can't simply buffer the source blocks on
the local disk and then generate the parity blocks from them, since
that costs two extra disk I/Os per block (one write plus one read back).
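To make the idea concrete, here is a rough sketch (my own illustration,
not real HDFS code): the writer folds each source block into an
in-memory parity buffer as the block goes through the write pipeline,
so the parity block can be written out without reading the source
blocks back from disk. It assumes simple XOR parity over a stripe of K
full-sized blocks; a real scheme would probably use Reed-Solomon and
would have to handle partial stripes and memory pressure.

    // Sketch only: accumulate XOR parity in memory while source blocks
    // stream through the write path, so the parity block can be emitted
    // without re-reading the source blocks from disk.
    public class StreamingXorParity {
        private final int blockSize;      // bytes per block
        private final int stripeWidth;    // K source blocks per parity block
        private final byte[] parity;      // in-memory parity accumulator
        private int blocksInStripe = 0;

        public StreamingXorParity(int blockSize, int stripeWidth) {
            this.blockSize = blockSize;
            this.stripeWidth = stripeWidth;
            this.parity = new byte[blockSize];
        }

        /**
         * Fold one source block into the parity accumulator. Returns the
         * parity block when a stripe completes, or null otherwise.
         * Assumes block.length <= blockSize.
         */
        public byte[] addSourceBlock(byte[] block) {
            for (int i = 0; i < block.length; i++) {
                parity[i] ^= block[i];
            }
            if (++blocksInStripe == stripeWidth) {
                byte[] out = parity.clone();   // parity block ready to write
                java.util.Arrays.fill(parity, (byte) 0);
                blocksInStripe = 0;
                return out;
            }
            return null;                        // stripe not complete yet
        }
    }

Back of the envelope: with replication 3, a 10-block file costs 30
block writes; with a single replica plus, say, a hypothetical (10,4)
code computed in the pipeline as above, it costs 14, and network
traffic shrinks roughly in proportion. Of course the failure
characteristics of the two schemes are not identical.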
One problem with parity blocks is that they can't serve as a
distributed cache at all: with a single replica, every reader of a hot
block has to hit the same DataNode. I wonder how much read performance
would degrade. Is there any study comparing read performance when the
cluster keeps one replica per block vs. 3 replicas?

Thanks,
Da

On Tue, Nov 1, 2011 at 10:09 AM, Robert Evans <ev...@yahoo-inc.com> wrote:
> Our largest cluster is several thousand nodes and we still run with a 
> replication factor of 3.  We have not seen any benefit from having a larger 
> replication factor except when it is a resource that lots of machines will 
> use, aka distributed cache.  Other than that, 3 seems just fine for most
> map/reduce processing.
>
> --Bobby Evans
>
> On 10/31/11 2:50 PM, "Zheng Da" <zhengda1...@gmail.com> wrote:
>
> Hello Ram,
>
> Sorry, I didn't notice your reply.
>
> I don't really have a complete design in mind. I wonder whether the
> community is interested in an alternative scheme for supporting data
> reliability and whether there are plans to pursue one.
>
> You are right, we might need to buffer the source blocks on the local
> disk, and parity blocks might not gain us much if we only try to match
> the reliability of a small replication factor. But in larger HDFS
> clusters we need a larger replication factor (>= 3), right? Furthermore,
> network bandwidth is a scarce resource, so saving network bandwidth
> matters even more.
>
> Thanks,
> Da
>
> On Tue, Oct 25, 2011 at 12:51 AM, Ramkumar Vadali
> <ramkumar.vad...@gmail.com> wrote:
>> (sorry for the delay in replying)
>>
>> Hi Zheng
>>
>> You are right about HDFS RAID. It is used to save space, and is not involved
>> in the file write path. The generation of parity blocks and the reduction of
>> the replication factor happen after a configurable amount of time.
>>
>> What is the design you have in mind? When the HDFS file is being written,
>> the data is generated block-by-block. But generating parity blocks will
>> require multiple source blocks to be ready, so the writer will need to
>> buffer the original data, either in memory or on disk. If it is saved on
>> disk because of memory pressure, will this be similar to writing the file
>> with replication 2?
>>
>> Ram
>>
>>
>> On Thu, Oct 13, 2011 at 1:16 AM, Zheng Da <zhengda1...@gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> Right now HDFS still uses simple replication to increase data
>>> reliability. Even though it works, it wastes disk space as well as
>>> network and disk bandwidth. For data-intensive applications (those
>>> that need to write large results to HDFS), it limits the throughput
>>> of MapReduce. It is also very energy-inefficient.
>>>
>>> Is the community trying to use erasure codes to increase data
>>> reliability? I know someone is working on HDFS-RAID, but that only
>>> addresses disk space. In many cases, network and disk bandwidth
>>> matter more, since they are the factors limiting the throughput of
>>> MapReduce. Has anyone tried to use erasure codes to reduce the amount
>>> of data written to HDFS? I know reducing replication might hurt read
>>> performance, but I think it's still important to reduce the amount of
>>> data written to HDFS.
>>>
>>> Thanks,
>>> Da
>>>
>>
>
>
