From the Java SE 7 JavaDocs:
A program element annotated @Deprecated is one that programmers are discouraged
from using, typically because it is dangerous, or because a better alternative
exists. Compilers warn when a deprecated program element is used or overridden
in non-deprecated code.
and from the javadoc page:
A deprecated API is one that you are no longer recommended to use, due to
changes in the API. While deprecated classes, methods, and fields are still
implemented, they may be removed in future implementations, so you should not
use them in new code, and if possible rewrite old code not to use them.
So, yes, deprecation is just a warning to avoid these APIs, but
deprecation is a stronger statement than you're portraying. It's not
merely fair notice that the API may go away; it's final notice that the
API should go away, but for backward-compatibility reasons it can't.
Deprecated := don't use. You shouldn't deprecate an API unless there
is an alternative or unless its use is actually dangerous.
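To make that concrete, here is a minimal sketch of the "deprecated but still implemented" contract; the class and method names are illustrative, not from Hadoop or the JDK:

```java
// LegacyApi keeps the old entry point working while steering callers away.
class LegacyApi {
    /** @deprecated superseded; use {@link #greet(String)} instead. */
    @Deprecated
    static String hello(String name) {
        return greet(name); // kept only for backward compatibility
    }

    static String greet(String name) {
        return "Hello, " + name;
    }
}

public class Caller {
    public static void main(String[] args) {
        // A deprecated member used from non-deprecated code is exactly what
        // the compiler flags (surface it with javac -Xlint:deprecation),
        // yet the call still compiles and runs.
        System.out.println(LegacyApi.hello("world"));
    }
}
```

The call site warns at compile time but behaves identically at runtime, which is the whole point: the warning is a statement of intent, not a behavior change.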
Daniel
On 04/01/10 19:23, Chris Douglas wrote:
- de-deprecate "classic" mapred APIs (no Jira issue yet)
Why?
So that folks can be told that if their code compiles without deprecation
warnings against 1.0 then it should work for all 1.x releases.
Deprecation warnings aren't only fair notice that the API may go away.
The classic FileSystem and mapred APIs may never be purged if a
compelling backwards compatibility story is developed. But without
that solution, those applications may, someday, break. Until then, the
deprecation warnings serve not only to steer developers away from code
removed in that hypothetical situation, but also identify those
sections as not actively developed. I'm pretty sure Thread::destroy
still works and it was deprecated in what, 1.1? Deprecation is a
signal that the development effort *may* proceed at the expense of
these APIs, whether in performance, versatility, or, in the most
extreme case, removal. Nobody will harm users of these APIs without justifying
why a solution avoiding it is worse.
I don't mind releasing 1.0 with the classic APIs. Given the installed
base, it's probably required. But let's not kill the new APIs by
calling them "experimental," thereby granting the old ones "official"
status at the moment the new ones become viable.
I was thinking that the new APIs should be 'public evolving' in 1.0. The
classic APIs would be 'public stable'. Unless we don't want to reserve the
right to still evolve the new APIs between now and 2.0.
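A rough sketch of what such audience/stability markers look like; these annotation and class names are hypothetical stand-ins in the spirit of the HADOOP-6668 work, not Hadoop's actual identifiers:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// "Public stable": frozen for all 1.x releases.
@Documented @Retention(RetentionPolicy.RUNTIME)
@interface PublicStable {}

// "Public evolving": public, but may change between minor releases.
@Documented @Retention(RetentionPolicy.RUNTIME)
@interface PublicEvolving {}

@PublicStable
class ClassicJobConf {}   // stand-in for a classic mapred-style API

@PublicEvolving
class NewJob {}           // stand-in for a new mapreduce-style API

public class StabilityCheck {
    public static void main(String[] args) {
        // Tools (or javadoc filters) can read the markers reflectively.
        System.out.println(ClassicJobConf.class.isAnnotationPresent(PublicStable.class));
        System.out.println(NewJob.class.isAnnotationPresent(PublicEvolving.class));
    }
}
```

Annotations like these let the project state per-class compatibility promises in the source itself, rather than only in release notes.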
The new APIs are unusable in the 0.20-based 1.0. They'd be added, at
high expense, in 1.2 at the earliest in the structure you've proposed,
since FileContext and mapreduce.lib are only in the 0.21 branch.
Realistically, 2.0 (which is what Tom is releasing in your model,
right?) is the first time anyone will consider the new APIs. By that
time, we'll have a larger installed base on the classic APIs,
attracted by the 1.0 label. And the proposal is to cut this 1.0
release concurrently with a 2.0 alpha? A 0.20-based 1.0 will undermine
the new release, again, just as its payload becomes viable.
I did suggest that it would be good to subsequently release a version of
Y!'s 0.20-based security patches as a 1.1 release. That's where Y! will
first qualify security, and it seems a shame not to release that version.
But perhaps this will prove impractical for some reason.
Re-release it in Apache? Why spend the effort to repackage an older
version with fewer features and inferior performance when most of the
work is already in trunk?
It could instead be named 0.20.3, but if we agree that this
(clarified with Tom's annotations) establishes the 1.0 API, then it would be
good to number it as such, no?
I continue to disagree. That the methods are not removed in 1.0 does
not establish them as "the 1.0 API". Nobody has advocated for their
removal, because it would be ruinous to users, but that stance doesn't
require a commitment to those APIs as the only stable ones,
particularly over the APIs designed for backwards compatibility.
I don't see that this would prevent or discourage any other release. Nor
does it require you to backport anything. Any backporting would be
voluntary. Tom's privately told me he doesn't expect it to be difficult to
backport HADOOP-6668 & MAPREDUCE-1623 (stability annotations) or
MAPREDUCE-1650 (exclude private from javadoc), and I'm willing to backport
those if he doesn't.
It would require committers and contributors to backport bugs fixed in
2.0 to 1.x. This would not be a voluntary burden borne only by the
willing. Calling 0.20 the basis of 1.0 imposes an even longer life for
that branch that must be endured by everyone working on the project.
And the delta between these releases is not trivial.
It seems you do oppose this proposal. Would you veto code changes required
to make such a release with a technical rationale? Would you vote -1 in the
(majority-based) release vote?
I've said plainly that I oppose it. I don't know what you mean by
vetoing the required code changes. Are you suggesting that I would
sabotage this work by blocking issues from being committed to the
release branch? And yes: right now, I would vote -1 on the release.
Speaking of the release vote process, I renew my request that we
formalize both the RM role and the bylaws. -C