> We've long delayed declaring 1.0 because we were afraid to commit to
> supporting a given API for a longer term. Now folks are willing to make
> that long-term commitment to an API, yet seem reluctant to call it 1.0.
The commitment is to the new APIs. "Folks" are reluctant to cut a
release with
Allen Wittenauer wrote:
My main point was that suddenly people seem to be hot to declare something 1.0.
I'm trying to understand why [...]
My rationale for suggesting a release named 1.0 was that I prefer that
release numbers say something about compatibility. The compatibility
rules we've
On Apr 6, 2010, at 6:02 AM, Steve Loughran wrote:
> Allen Wittenauer wrote:
>> On Apr 5, 2010, at 5:06 PM, Chris K Wensel wrote:
>>> we need a well-heeled 1.0 sooner than later.
>> Why?
>
> I think it would be good for a 0.21 with the newly renamed artifacts
> hadoop-common, hadoop-hdfs and hadoop-mapred
Allen Wittenauer wrote:
On Apr 5, 2010, at 5:06 PM, Chris K Wensel wrote:
we need a well-heeled 1.0 sooner than later.
Why?
I think it would be good for a 0.21 with the newly renamed artifacts
hadoop-common, hadoop-hdfs and hadoop-mapred out there; I think the new
APIs should be made av
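For orientation, the renamed artifacts map roughly onto package roots as in the sketch below; DistributedFileSystem and JobConf are just representative classes, not a complete inventory:

    import org.apache.hadoop.conf.Configuration;          // hadoop-common
    import org.apache.hadoop.hdfs.DistributedFileSystem;  // hadoop-hdfs
    import org.apache.hadoop.mapred.JobConf;              // hadoop-mapred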
Chris Douglas wrote:
Thus far the changes suggested for a 1.0 branch are:
- de-deprecate "classic" mapred APIs (no Jira issue yet)
Why? Tom and Owen's proposal preserves compatibility with the
deprecated FileSystem and mapred APIs up to 1.0. After Tom cuts a
release- from either the 0.21 branch
On Apr 5, 2010, at 5:06 PM, Chris K Wensel wrote:
>
> we need a well-heeled 1.0 sooner than later.
Why?
> Summarily: given that the APIs are *not* fully functional, preferred
> alternatives in 0.20- we shouldn't base our 1.0 release on it. Do you
> agree? -C
well said, but I still think a release is fine off .20 if we remove the
deprecation warnings (and drop the new apis completely as they add co
> Actually, from my perspective, re the 0.20 branch, they are not preferred
> alternatives and are not complete as more were introduced into .21 (of which
> many are wrappers around the stable apis for sake of transition).
Sorry, I must have been unclear, because this is part of the argument.
Fi
> The APIs at
> issue have preferred alternatives and will be retained for backwards
> compatibility reasons.
Actually, from my perspective, re the 0.20 branch, they are not preferred
alternatives and are not complete as more were introduced into .21 (of which
many are wrappers around the stable apis for sake of transition).
> So, yes, deprecation is just a warning to avoid these APIs, but deprecation
> is a stronger statement than you're portraying. It's not fair notice that
> the API may go away. It's final notice that the API should go away but for
> backward compatibility reasons it can't. Deprecated := don't use
From the Java SE 7 JavaDocs:
A program element annotated @Deprecated is one that programmers are discouraged
from using, typically because it is dangerous, or because a better alternative
exists. Compilers warn when a deprecated program element is used or overridden
in non-deprecated code.
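A minimal sketch of those semantics, with hypothetical class and method names; the comment shows roughly what javac reports:

    class LegacyApi {
        /** @deprecated Use {@link #newMethod()}; kept for compatibility. */
        @Deprecated
        static void oldMethod() { newMethod(); }

        static void newMethod() { /* preferred alternative */ }
    }

    class Caller {
        void run() {
            // javac: warning: [deprecation] oldMethod() in LegacyApi has been deprecated
            LegacyApi.oldMethod();
        }
    }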
Chris Douglas wrote:
Speaking of the release vote process, I renew my request that we
formalize both the RM role and the bylaws. -C
I think the HTTPD release rules are non-controversial and would support
adoption of something similar. Someone needs to draft a proposal,
initiate a discussion,
Owen O'Malley wrote:
In my experience with releasing Hadoop, the bare minimum of scale
testing is a couple of weeks on 500 nodes (and more is far better) with
a team of people testing it. I think that releasing a 1.0 that has never
been tested at scale would be disastrous.
For the record, I n
On Apr 1, 2010, at 10:50 AM, Doug Cutting wrote:
If it takes months, it is a failure. It should take weeks, if that.
On Apr 1, 2010, at 9:31 PM, Dhruba Borthakur wrote:
We have been testing the HDFS append code for 0.20 (using HDFS-200,
HDFS-142), but I believe it is not ready for production
We have been testing the HDFS append code for 0.20 (using HDFS-200,
HDFS-142), but I believe it is not ready for production yet. I am guessing
that there would be another two months of testing before I would classify
0.20.3 + HDFS-200 as production quality. HDFS-200 touches code paths that
would ge
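For context, the client-side append path under test looks roughly like this; a sketch assuming a 0.20-era build with the append patches applied and HDFS as the default filesystem, with a made-up path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The 0.20-era append work was gated behind this flag.
            conf.setBoolean("dfs.support.append", true);
            FileSystem fs = FileSystem.get(conf);
            // append() requires an existing file; it reopens the last block,
            // which is the code path HDFS-200/HDFS-142 harden.
            FSDataOutputStream out = fs.append(new Path("/logs/events.log"));
            try {
                out.writeBytes("appended record\n");
                out.sync(); // renamed hflush() in later releases
            } finally {
                out.close();
            }
        }
    }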
> Companies wanting a 1.0 product could always pay Cloudera and get a
v2 product.
lol :) good point Allen, let's please *not* adopt a 1.0 labeling for
Apache Hadoop :)
Seriously though, to avoid my previous comment about 1.0 labeling being
misinterpreted, though I think the 1.0 labeling is i
>>> - de-deprecate "classic" mapred APIs (no Jira issue yet)
>>
>> Why?
>
> So that folks can be told that if their code compiles without deprecation
> warnings against 1.0 then it should work for all 1.x releases.
Deprecation warnings aren't only fair notice that the API may go away.
The classic
Hi Guys,
To throw in my 2 cents: it would be really nice to get out a 1.0 branch
based off of 0.20 - it's not perfect, but releases never are. That's why you
can make more of them. :)
In terms of the significance of the 1.0 labeling, I think it's important for
adoption. I was telling someone at J
Chris Douglas wrote:
- de-deprecate "classic" mapred APIs (no Jira issue yet)
Why?
So that folks can be told that if their code compiles without
deprecation warnings against 1.0 then it should work for all 1.x releases.
I don't mind releasing 1.0 with the classic APIs. Given the installe
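For readers comparing the two, job submission under the classic and new APIs looks roughly as follows; a sketch against the 0.20-era classes, with mapper/reducer wiring omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ApiContrast {
        // Classic API (org.apache.hadoop.mapred), deprecated in 0.20;
        // the proposal is to de-deprecate it for 1.0.
        static void classicSubmit(String in, String out) throws Exception {
            org.apache.hadoop.mapred.JobConf conf =
                new org.apache.hadoop.mapred.JobConf(ApiContrast.class);
            org.apache.hadoop.mapred.FileInputFormat.setInputPaths(conf, new Path(in));
            org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(conf, new Path(out));
            org.apache.hadoop.mapred.JobClient.runJob(conf);
        }

        // New API (org.apache.hadoop.mapreduce), introduced in 0.20 and,
        // per this thread, not yet fully functional there.
        static void newSubmit(String in, String out) throws Exception {
            org.apache.hadoop.mapreduce.Job job =
                new org.apache.hadoop.mapreduce.Job(new Configuration(), "demo");
            job.setJarByClass(ApiContrast.class);
            org.apache.hadoop.mapreduce.lib.input.FileInputFormat
                .addInputPath(job, new Path(in));
            org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
                .setOutputPath(job, new Path(out));
            job.waitForCompletion(true);
        }
    }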
LOL, I want a v100! :)
On 4/1/10 2:31 PM, "Allen Wittenauer" wrote:
On 4/1/10 2:15 PM, "Mattmann, Chris A (388J)"
wrote:
> In terms of the significance of the 1.0 labeling, I think it's important for
> adoption.
Companies wanting a 1.0 product could always pay Cloudera and get a v2
product
On 4/1/10 2:15 PM, "Mattmann, Chris A (388J)"
wrote:
> In terms of the significance of the 1.0 labeling, I think it's important for
> adoption.
Companies wanting a 1.0 product could always pay Cloudera and get a v2
product. ;)
> Thus far the changes suggested for a 1.0 branch are:
> - de-deprecate "classic" mapred APIs (no Jira issue yet)
Why? Tom and Owen's proposal preserves compatibility with the
deprecated FileSystem and mapred APIs up to 1.0. After Tom cuts a
release- from either the 0.21 branch or trunk- then iss
Todd Lipcon wrote:
With HDFS-200 we'd also need HDFS-142
Good to know. I have to admit to being puzzled by HDFS-200, since
Nicholas resolved it as a duplicate on 7 January, yet Dhruba's continued
to post patches to it.
Dhruba, Stack: do you have any thoughts on the appropriateness of maki
On Thu, Apr 1, 2010 at 10:50 AM, Doug Cutting wrote:
> Chris Douglas wrote:
>
>> Spending the next few months voting and arguing on which
>> patches make it into "new" 0.20 (branched in 2008) instead of
>> addressing these issues is *not* progress. I strongly oppose this.
>>
>
> If it takes month
Chris Douglas wrote:
Spending the next few months voting and arguing on which
patches make it into "new" 0.20 (branched in 2008) instead of
addressing these issues is *not* progress. I strongly oppose this.
If it takes months, it is a failure. It should take weeks, if that.
Thus far the chang
> My latest proposal, a 1.0 branch based on 0.20, contains two questions:
>
> 1. Should we make an Apache release that more closely corresponds to what
> folks are using in production today (and will be using for a while yet)?
>
> 2. If we're considering the 0.20 mapreduce and filesystem APIs to be
Thanks Tom and Owen for stepping up --
We're using 0.20.2 as effectively 1.0 here, too, so I think a 1.0 branch is
a good idea that recognizes the status quo and deals with it, particularly
for having a 1.0 that's pre-split and pre-security (big changes).
Couple random thoughts:
1) I agree with
Chris K Wensel wrote:
are we saying we will de-deprecate the stable APIs in .20, or make the new APIs
introduced in .20 stable?
+1 on removing the deprecations on the stable APIs.
Yes. I too am +1 on removing deprecations in stable, public APIs in a
1.0 release. Code that uses only public
are we saying we will de-deprecate the stable APIs in .20, or make the new APIs
introduced in .20 stable?
+1 on removing the deprecations on the stable APIs.
On Mar 31, 2010, at 2:19 PM, Doug Cutting wrote:
> Konstantin Shvachko wrote:
>> I would like to propose a straightforward release of 0.21
Our org (Trend Micro) will be using an internal build based on 0.20 for at
least the rest of this year. It is, really, already "1.0" from our point of
view, the first ASF Hadoop release officially adopted into our production
environment. I hope other users of Hadoop will speak up on this thread
On 3/31/2010 2:19 PM, Doug Cutting wrote:
> Konstantin Shvachko wrote:
>> I would like to propose a straightforward release of 0.21 from current
>> 0.21 branch.
>
> That could be done too. Would you like to volunteer to drive a release from
> the current 0.21 branch?
I would if I could.
I intende
Konstantin Shvachko wrote:
I would like to propose a straightforward release of 0.21 from current
0.21 branch.
That could be done too. Tom's volunteered to drive a release from trunk
in a few weeks. Owen's volunteered to drive another release from trunk
in about six months. Would you like
If I may pitch in briefly here, believe it or not, there are a lot of
enterprises out there who think that anything that isn't version 1.0
isn't worth considering, let alone deploying (doesn't make sense, but
some people are like that). Hence, from a market adoption point of view,
Apache Hadoop
[Owen] > I think that we should change the rules so that the remaining
0.X releases are minor releases.
+1
[Owen] > I'll volunteer to be release manager for a release branched
in November, which should be roughly 6 months after Tom's new 0.21
release.
That would be great. Thanks, Owen!
[Doug] >
HDFS 0.20 does not have a reliable append.
Also it is (was last time I looked) incompatible with the 0.21 append HDFS-256.
That wouldn't be a problem if that was the only incompatibility. But it's not.
If 1.0 is re-labeled or re-branched from 0.20 we will have too many
incompatibilities
going int
Allen Wittenauer wrote:
The fact that there are a *ton*
of admin tool changes/fixes/additions in the Yahoo! Distribution of 0.20
(and quite a few in CDH) should be the big hint that Apache 0.20 is *not*
1.0.
Right. I'm proposing we make a 1.0 release that tries to match what
folks are actual
Owen O'Malley wrote:
It is tempting and I think that 0.20 is *really* our 1.0, but I think
re-labeling a release a year after it came out would be confusing.
I wasn't proposing just a re-labeling. I was proposing a new release,
branched from 0.20 rather than trunk. We'd introduce some change
On 3/30/10 8:22 PM, "Owen O'Malley" wrote:
>
> On Mar 30, 2010, at 3:40 PM, Doug Cutting wrote:
>
>> Another release we might consider is 1.0 based on 0.20.
>
> It is tempting and I think that 0.20 is *really* our 1.0, but I think
> re-labeling a release a year after it came out would be co
Hi,
I'm glad we're heading towards a release. We'd like to better understand some
aspects regarding the release plan.
What would be the tentative release schedule, and what affects particular
releases? We could either continue with our current version or plan based on
what's going to be relea
On Mar 30, 2010, at 3:40 PM, Doug Cutting wrote:
Another release we might consider is 1.0 based on 0.20.
It is tempting and I think that 0.20 is *really* our 1.0, but I think
re-labeling a release a year after it came out would be confusing.
I think that we should change the rules so that
> A 1.0 release based off 0.20 would give us a chance to state more precisely
> the 1.0 API that we intend to support long-term. For example, we might
> un-mark the old mapreduce APIs as deprecated in a 1.0 release, and mark the
> new mapreduce APIs as experimental and unstable there. Programs
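Hadoop later gained classification annotations (org.apache.hadoop.classification, added around 0.21, so they postdate this thread) that can express this split directly; a sketch of how the proposal might read in code, with made-up class names:

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    @InterfaceAudience.Public
    @InterfaceStability.Stable     // classic API: supported across all 1.x
    class ClassicJobRunner { }

    @InterfaceAudience.Public
    @InterfaceStability.Unstable   // new API: experimental in a 0.20-based 1.0
    class NewJobRunner { }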
Tom White wrote:
I think the focus should be on getting an alpha release
out, so I suggest we create a new 0.21 branch from trunk
Another release we might consider is 1.0 based on 0.20. We'd then have
releases that correspond to what folks are actually using in production.
This would also r
Stack wrote:
Getting a release out is critical. Otherwise, IMO, the project is
dead but for the stiffening.
Thanks Tom for stepping up to play the RM role for a 0.21.
Regarding Steve's call for what we can offer Tom to help along the
release, the little flea hbase can test its use case on 0.21
On Fri, Mar 26, 2010 at 11:43 AM, Owen O'Malley wrote:
>
> On Mar 24, 2010, at 4:25 PM, Tom White wrote:
>
>> I agree that getting the release process restarted is of utmost
>> importance to the project. To help make that happen I'm happy to
>> volunteer to be a release manager for the next releas
> Thanks Tom for stepping up to play the RM role for a 0.21.
+1 Thanks Tom.
> Regarding Steve's call for what we can offer Tom to help along the
> release, the little flea hbase can test its use case on 0.21.0
> candidates and we can probably take on a few of the HDFS blockers. I
> also like Ste
Getting a release out is critical. Otherwise, IMO, the project is
dead but for the stiffening.
Thanks Tom for stepping up to play the RM role for a 0.21.
Regarding Steve's call for what we can offer Tom to help along the
release, the little flea hbase can test its use case on 0.21.0
candidates a
On Mar 24, 2010, at 4:25 PM, Tom White wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will undoubt
On Wed, Mar 24, 2010 at 1:27 PM, Brian Bockelman wrote:
> a) Have a stable/unstable series (0.19.x is unstable, 0.20.x is stable,
> 0.21.x is unstable). For the unstable releases, lower the bar for code
> acceptance for less-risky patches.
I can see how the different criteria of patch acceptanc
Tom White wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will undoubtedly be some issues
to work out
Hey Tom,
That sounds like a great idea. +1.
Thanks,
Jeff
On Wed, Mar 24, 2010 at 4:25 PM, Tom White wrote:
> I agree that getting the release process restarted is of utmost
> importance to the project. To help make that happen I'm happy to
> volunteer to be a release manager for the next relea
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will undoubtedly be some issues
to work out. I think the focus
Hey Allen,
Your post provoked a few thoughts:
1) Hadoop is a large, but relatively immature project (as in, there's still a
lot of major features coming down the pipe). If we wait to release on
features, especially when there are critical bugs, we end up with a large
number of patches between
On 3/15/10 9:06 AM, "Owen O'Malley" wrote:
> From our 0.21 experience, it looks like our old release strategy is
> failing.
Maybe this is a dumb question but... Are we sure it isn't the community
failing?
From where I stand, the major committers (PMC?) have essentially forked
Hadoop into
Hey Owen,
Which aspects of the HTTPD release strategy do you find most useful compared
to the current Hadoop release strategy?
Thanks,
Jeff
On Mon, Mar 15, 2010 at 8:06 AM, Owen O'Malley wrote:
> From our 0.21 experience, it looks like our old release strategy is failing.
> In looking around, I
From our 0.21 experience, it looks like our old release strategy is
failing. In looking around, I found that HTTPD's release strategy is
extremely different and seems much more likely to produce usable
releases. It is well worth reading, in my opinion.
http://httpd.apache.org/dev/release.html