Some of the information in the email is not correct.  Let me clarify a few points.
 
> Where we are today.. append was added in the 0.17-19 releases
> (HADOOP-1700) . . .
 
We never had append/sync in 0.17.  Sync was added in 0.18 but not append.
Append was added in 0.19.  By append/sync above, I mean the implementation
from HADOOP-1700.  We also have HDFS-265, the new append/hflush.  Below are
the details.
 
Versions          Features
<= 0.17:          no sync/append
0.18:             HADOOP-1700 sync
0.19.0:           HADOOP-1700 append
0.19.1, 0.20:     HADOOP-1700 append disabled
0.20-append:      append branch used by Facebook
0.20.205.0:       HADOOP-1700 append merged into 0.20
>= 0.21:          HDFS-265 append/hflush
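
For reference, on the 0.20.x line the HADOOP-1700 code path is gated by the
dfs.support.append configuration flag.  Below is a minimal, hypothetical
sketch of turning it on from a Java client; the flag must also be set in
hdfs-site.xml on the namenode and datanodes, and everything else here is
just a placeholder.

  // Minimal sketch, assuming a 0.20.x-era release where the HADOOP-1700
  // append code is compiled in but disabled by default.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class EnableAppendSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // dfs.support.append gates the HADOOP-1700 append path; setting it
      // on the client alone is not enough, the cluster side needs it too.
      conf.setBoolean("dfs.support.append", true);

      FileSystem fs = FileSystem.get(conf);
      // With the flag off, FileSystem.append(...) is rejected; with it on,
      // the HADOOP-1700 code path is used.
      System.out.println("append flag set for " + fs.getUri());
    }
  }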
 
> . . . To my knowledge, there has been no real production use. . .
 
The reason there is no production use today is simply that append is not
yet in a stable release.  Besides, that does not mean append is not useful.
 
> . . . The design however, is much improved, and people think we can get
> hsync (and append) stabilized in trunk (mostly testing and bug fixing).
 
hsync is not yet implemented.  I think you may mean hflush.
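
To make the distinction concrete, here is a rough sketch of what hflush
(HDFS-265, available in 0.21/trunk) gives a writer: the bytes written so far
are pushed to the datanode pipeline so that a concurrent reader can see them
while the file is still open.  The path and payload below are placeholders.

  // Rough sketch of hflush, assuming a 0.21+/trunk client where
  // FSDataOutputStream exposes hflush() (HDFS-265).
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HflushSketch {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      FSDataOutputStream out = fs.create(new Path("/tmp/hflush-sketch.log"));

      out.write("edit 1\n".getBytes("UTF-8"));
      // hflush pushes the client buffer to the datanodes so that a new
      // reader can already see "edit 1" even though the file is open.
      // hsync (persist to disk) would be a separate call, not implemented
      // at this point.
      out.hflush();

      out.write("edit 2\n".getBytes("UTF-8"));
      out.close();  // close finalizes the last block
    }
  }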
 
> . . . This probably explains why, over 5 years after the original
> implementation started, we don't have a stable release with append.
 
HADOOP-1700 was committed on July 25, 2008.  I don’t know how it could be
“over 5 years”.  It is well known that append in the 0.20.x releases is not
stable and hence probably not used.  It is not the case that we don’t have a
stable release because append is not stable.
 
> Append introduces non-trivial design and code complexity, which is not
> worth the cost if we don't have real users. . . .
 
I don’t agree.  The non-trivial design and code complexity come from hflush,
not from append.  Once we have hflush, append is straightforward: it re-opens
an existing file and writes through the same pipeline machinery that hflush
already requires (see the sketch below).  Roughly speaking, the append work
is about 10% of the entire append/hflush effort.
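
A rough sketch, under the assumption of a release with append enabled, of how
small the append-specific surface is for a client: re-open the file, keep
writing, and rely on the same hflush visibility guarantee.  The path is a
placeholder and the file is assumed to already exist.

  // Rough sketch of append on top of the hflush machinery, assuming a
  // release with append enabled.  The file is assumed to exist already.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class AppendSketch {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());

      // Re-open the existing file for write at its current end; later
      // writes go through the same pipeline code that hflush already needs.
      FSDataOutputStream out = fs.append(new Path("/tmp/hflush-sketch.log"));
      out.write("appended after reopen\n".getBytes("UTF-8"));
      out.hflush();  // same visibility guarantee as during the first write
      out.close();
    }
  }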
 
Moreover, there are real users and use cases, as mentioned by Dave and Milind.
 
The JIRA that you created to split the flag into “hflush supported” and
“append supported” is a good idea.  Folks who need hflush but not append can
still disable append.
 
Regards,
Nicholas



________________________________
 From: Eli Collins <e...@cloudera.com>
To: hdfs-dev@hadoop.apache.org 
Sent: Tuesday, March 20, 2012 5:37 PM
Subject: [DISCUSS] Remove append?
 
Hey gang,

I'd like to get people's thoughts on the following proposal. I think
we should consider removing append from HDFS.

Where we are today.. append was added in the 0.17-19 releases
(HADOOP-1700) and subsequently disabled (HADOOP-5224) due to quality
issues. It and sync were re-designed, re-implemented, and shipped in
0.21.0 (HDFS-265). To my knowledge, there has been no real production
use. Anecdotally people who worked on branch-20-append have told me
they think the new trunk code is substantially less well-tested than
the branch-20-append code (at least for sync, append was never well
tested). It has certainly gotten way less pounding from HBase users.
The design however, is much improved, and people think we can get
hsync (and append) stabilized in trunk (mostly testing and bug
fixing).

Rationale follows..

Append does not seem to be an important requirement, hflush was. There
has not been much demand for append, from users or downstream
projects. Because Hadoop 1.x does not have a working append
implementation (see HDFS-3120; the branch-20-append work was focused
on sync, not on getting append working), which is not enabled by default,
and downstream projects will want to support Hadoop 1.x releases for
years, most will not introduce dependencies on append anyway. This is
not to say demand does not exist, just that if it does, it's been much
smaller than security, sync, HA, backwards compatible RPC, etc. This
probably explains why, over 5 years after the original implementation
started, we don't have a stable release with append.

Append introduces non-trivial design and code complexity, which is not
worth the cost if we don't have real users. Removing append means we
have the property that HDFS blocks, when finalized, are immutable.
This significantly simplifies the design and code, which significantly
simplifies the implementation of other features like snapshots,
HDFS-level caching, dedupe, etc.

The vast majority of the HDFS-265 effort is still leveraged w/o
append. The new data durability and read consistency behavior was the
key part.

GFS, which HDFS' design is based on, has append (and atomic record
append) so obviously a workable design does not preclude append.
However we also should not ape the GFS feature set simply because it
exists. I've had conversations with people who worked on GFS that
regret adding record append (see also
http://queue.acm.org/detail.cfm?id=1594206). In short, unless append
is a real priority for our users I think we should focus our energy
elsewhere.

Thanks,
Eli
