Hello,

We are a group of interns at IISc, Bangalore, and have implemented Clay
Codes <https://www.usenix.org/conference/fast18/presentation/vajha> in
Hadoop using the pluggable erasure codec API (HDFS-7337).

Clay codes are erasure codes with several must-have properties that make
them practical for distributed storage systems.
They reduce network bandwidth (the amount of data transferred on a
single node failure), decrease repair times, and improve I/O
performance.
Clay codes are built on top of an underlying RS code (in fact, any MDS
code), which makes them attractive and easy to use and extend.
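As a toy sketch of the MDS idea that Clay codes inherit from the underlying code: with k data blocks, an MDS code lets any k of the n stored blocks rebuild the data. The simplest case is a single XOR parity block (n = k + 1), which RS codes generalize to tolerate n - k erasures. This is purely illustrative and assumes nothing about the Hadoop or Clay code APIs:

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    """Append one XOR parity block to k data blocks (n = k + 1)."""
    return data_blocks + [xor_blocks(data_blocks)]

def repair(stripe, lost_index):
    """Rebuild the block at lost_index from the k surviving blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

data = [b"abcd", b"efgh", b"ijkl"]   # k = 3 data blocks
stripe = encode(data)                # n = 4 blocks stored
rebuilt = repair(stripe, 1)          # lose block 1, repair from the rest
assert rebuilt == b"efgh"
```

Clay codes improve on plain RS by sub-packetizing blocks so that a repair reads only a fraction of each helper block, which is where the bandwidth savings come from.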

We have posted both a design doc and a patch at HADOOP-15558
<https://issues.apache.org/jira/browse/HADOOP-15558>. We would be grateful
if someone could review them and suggest further improvements.

P.S. Clay codes have also been implemented and are under review at Ceph
<https://github.com/ceph/ceph/pull/14300>.


Regards,
M.V.S.Chaitanya & Shreya Gupta
