(This message is addressed to the entire hdfs-dev list deliberately, not to specific folks by mistake.)

Hello Folks,

I do not want to reopen wounds that are still bleeding, and I want to resolve these issues amicably, without causing a big inter-vendor confrontation.

So, these are the facts as I (and several others in the Hadoop community) see them:

1. There was an attempt to separate Hadoop into different projects: Common, HDFS, and MapReduce.

2. That attempt was aborted for several reasons, with common ownership, i.e. committership, being the biggest issue.

3. In the meantime, several important, release-worthy HDFS improvements were committed to Hadoop. (That's why I supported Konst's appeal for 0.22. These improvements were also incorporated into Hadoop products by the largest Hadoop ecosystem contributor, and by several others.)

4. All the Apache Hadoop bylaws were followed to get these improvements into the Hadoop project.

5. Yet the Common project, which is not even a top-level project since the awkward re-merge happened, got an incompatible wire-protocol change, which was accepted and promoted by a specific section of the community, in spite of the kicking and screaming of (what I think of as) a representative of a large Hadoop user community.

6. That change, and others like it, have created a big issue for the part of the community that has tested the HDFS portion of 2.x and has spent a lot of effort to stabilize HDFS, since HDFS was the major point of assault from proprietary storage systems, such as You-Know-Who.

I would like to raise this issue as an individual, regardless of my affiliation, so that we can make HDFS worthy of its association with the top-level ecosystem, without being too closely tied to it.

What do the HDFS developers think?

- milind

Sent from my iPhone
