[jira] [Created] (HDFS-2868) Add number of active transfer threads to the DataNode metrics

2012-02-01 Thread Harsh J (Created) (JIRA)
Add number of active transfer threads to the DataNode metrics
-------------------------------------------------------------

 Key: HDFS-2868
 URL: https://issues.apache.org/jira/browse/HDFS-2868
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


Presently, we do not provide any metric from the DN that specifically 
indicates the total number of active transfer threads (xceivers). Having such 
a metric would be very helpful, beyond the plain num-ops(type) metrics that 
already exist.
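
As a rough illustration only, such a gauge could be exposed through the
metrics2 system. The class, names, and the ThreadGroup accessor below are
assumptions for the sketch, not the final API:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;

// Hypothetical sketch of a metrics2 source that reports the DataNode's
// active transfer-thread (xceiver) count as a gauge.
@Metrics(name = "DataNodeXceiverMetrics", context = "dfs")
class XceiverMetricsSketch {
  private final ThreadGroup xceiverThreads;

  XceiverMetricsSketch(ThreadGroup xceiverThreads) {
    this.xceiverThreads = xceiverThreads;
  }

  @Metric("Number of active dataXceiver threads")
  public int getXceiverCount() {
    // activeCount() is an estimate, which is fine for a gauge.
    return xceiverThreads.activeCount();
  }
}
{code}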

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Jenkins build is back to stable : Hadoop-Hdfs-trunk #943

2012-02-01 Thread Apache Jenkins Server




Jenkins build is back to stable : Hadoop-Hdfs-0.23-Build #156

2012-02-01 Thread Apache Jenkins Server




[jira] [Created] (HDFS-2869) Error in Webhdfs documentation for mkdir

2012-02-01 Thread Harsh J (Created) (JIRA)
Error in Webhdfs documentation for mkdir
----------------------------------------

 Key: HDFS-2869
 URL: https://issues.apache.org/jira/browse/HDFS-2869
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0, 0.23.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


Reported over the lists by user Stuti Awasthi:

{quote}

I have tried the webhdfs functionality of Hadoop-1.0.0 and it is working fine.
Just a small change is required in the documentation:

The "Make a Directory" declaration in the documentation:
curl -i -X PUT "http://<HOST>:<PORT>/<PATH>?op=MKDIRS[&permission=<OCTAL>]"

Gives the following error:
HTTP/1.1 405 HTTP method PUT is not supported by this URL
Content-Length: 0
Server: Jetty(6.1.26)

Correction required -- this works for me:
curl -i -X PUT "http://<HOST>:<PORT>/*webhdfs/v1/*<PATH>?op=MKDIRS"
{quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2529) lastDeletedReport should be scoped to BPOfferService, not DN

2012-02-01 Thread Uma Maheswara Rao G (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-2529.
---------------------------------------

  Resolution: Duplicate
Release Note: Since this is covered as part of HDFS-2560, this issue is 
resolved as a duplicate of it.

> lastDeletedReport should be scoped to BPOfferService, not DN
> ------------------------------------------------------------
>
> Key: HDFS-2529
> URL: https://issues.apache.org/jira/browse/HDFS-2529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>
> Each BPOfferService separately tracks and reports deleted blocks. But 
> lastDeletedReport is a member variable of DataNode, so deletion reports may 
> not be triggered at the desired interval.
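
For illustration, a minimal sketch of the scoping this issue asks for (class
and field names are assumptions; the actual change landed via HDFS-2560):

{code:java}
// Sketch: each BPOfferService tracks its own lastDeletedReport instead of
// sharing a single field on the DataNode.
class BPOfferServiceSketch {
  private long lastDeletedReport = 0; // per block pool, not per DataNode

  void maybeSendDeletedBlockReport(long now, long reportIntervalMs) {
    if (now - lastDeletedReport >= reportIntervalMs) {
      // ... send the deletion report for this block pool only ...
      lastDeletedReport = now;
    }
  }
}
{code}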

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2870) HA: Remove some INFO level logging accidentally left around

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
HA: Remove some INFO level logging accidentally left around
-----------------------------------------------------------

 Key: HDFS-2870
 URL: https://issues.apache.org/jira/browse/HDFS-2870
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Todd Lipcon
Priority: Trivial
 Attachments: hdfs-2870.txt

Currently the NN is logging a line per block at INFO level in 
processMisReplicatedBlocks. This was just for debugging, and should be at trace 
level.
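
A minimal sketch of the intended logging change (the logger and message here
are illustrative, not the attached patch):

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class TraceLoggingSketch {
  private static final Log LOG = LogFactory.getLog(TraceLoggingSketch.class);

  void processMisReplicatedBlock(Object block) {
    // Demoted from INFO to trace, and guarded so the message string is
    // only built when trace logging is enabled.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Processing mis-replicated block " + block);
    }
  }
}
{code}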

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2870) HA: Remove some INFO level logging accidentally left around

2012-02-01 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2870.
-------------------------------

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

Committed to branch, thanks.

> HA: Remove some INFO level logging accidentally left around
> -----------------------------------------------------------
>
> Key: HDFS-2870
> URL: https://issues.apache.org/jira/browse/HDFS-2870
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Priority: Trivial
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2870.txt
>
>
> Currently the NN is logging a line per block at INFO level in 
> processMisReplicatedBlocks. This was just for debugging, and should be at 
> trace level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2871) Add a "no-block-delete" mode for use in recovery operations

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
Add a "no-block-delete" mode for use in recovery operations
-----------------------------------------------------------

 Key: HDFS-2871
 URL: https://issues.apache.org/jira/browse/HDFS-2871
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Todd Lipcon


Occasionally, due to operator error or some other reason, the NN metadata is 
accidentally lost or corrupted. When recovering the cluster, we usually 
recommend that the operator force the NN into safe mode at startup. But often 
this isn't quite enough, since they need to exit safe mode in order to start 
dependent services like HBase. It would be nice to add a safety config for 
cases like this that allows the NN to exit safe mode but not issue any block 
deletions. Then, if there is some problem, the NN could safely be restarted 
again with different metadata with no (or minimal) data loss.
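
One possible shape for such a guard, sketched with a hypothetical config key
(the key, class, and method names are illustrative, not an existing HDFS API):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: a "no block deletions" safety switch for recovery scenarios.
class BlockDeletionGuard {
  // Hypothetical key; not a real HDFS configuration property.
  static final String DFS_NAMENODE_BLOCK_DELETION_ENABLED =
      "dfs.namenode.block.deletion.enabled";

  private final boolean deletionsEnabled;

  BlockDeletionGuard(Configuration conf) {
    deletionsEnabled =
        conf.getBoolean(DFS_NAMENODE_BLOCK_DELETION_ENABLED, true);
  }

  /** Check this before queuing any block invalidations to DataNodes. */
  boolean mayDelete() {
    return deletionsEnabled;
  }
}
{code}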

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2872) Add sanity checks during edits loading that generation stamps are non-decreasing

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
Add sanity checks during edits loading that generation stamps are non-decreasing
--------------------------------------------------------------------------------

 Key: HDFS-2872
 URL: https://issues.apache.org/jira/browse/HDFS-2872
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 1.0.0
Reporter: Todd Lipcon


In 0.23 and later versions, we have a txid per edit, and the loading process 
verifies that there are no gaps. Lacking this in 1.0, we can use generation 
stamps as a proxy: the OP_SET_GENERATION_STAMP opcode should never result in a 
decreased genstamp. If it does, that indicates that the edits are corrupt, or, 
for example, that older edits are being applied to a newer checkpoint.
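
The check itself is simple; a sketch of what it might look like when the
loader applies the opcode (field and method names are assumptions):

{code:java}
import java.io.IOException;

// Sketch: reject a generation stamp that moves backwards during edit replay.
class GenStampCheckSketch {
  private long currentGenStamp;

  void applySetGenerationStamp(long newGenStamp) throws IOException {
    if (newGenStamp < currentGenStamp) {
      // Either the edits are corrupt, or old edits are being applied to a
      // newer checkpoint.
      throw new IOException("Generation stamp decreased from "
          + currentGenStamp + " to " + newGenStamp);
    }
    currentGenStamp = newGenStamp;
  }
}
{code}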

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2873) Add sanity check during image/edit loading that blocks aren't doubly referenced

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
Add sanity check during image/edit loading that blocks aren't doubly referenced
-------------------------------------------------------------------------------

 Key: HDFS-2873
 URL: https://issues.apache.org/jira/browse/HDFS-2873
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Todd Lipcon


We recently had an incident where an invalid image was accidentally fed into a 
NN. This image had the property that multiple inodes referred to the same set 
of blocks -- as if there were hard links, but without any reference-counting 
mechanism. The NN happily loaded it, but of course it later caused problems 
when one of the inodes was deleted. We should add a sanity check when 
replaying OP_ADD: if the blocks already exist in the system, they must be 
referenced by the same inode as in that OP_ADD.
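
A minimal sketch of that check (a plain map stands in for the real blocksMap;
all names are illustrative):

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch: on OP_ADD replay, verify that an already-known block is owned by
// the same inode that the opcode names.
class DoubleReferenceCheckSketch {
  private final Map<Long, String> blockToInode = new HashMap<Long, String>();

  void checkOpAdd(long blockId, String inodePath) throws IOException {
    String owner = blockToInode.get(blockId);
    if (owner != null && !owner.equals(inodePath)) {
      throw new IOException("Block " + blockId + " is already referenced by "
          + owner + "; refusing to attach it to " + inodePath);
    }
    blockToInode.put(blockId, inodePath);
  }
}
{code}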

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




a newbie question

2012-02-01 Thread Peter S
Hi Dev team,

I am a newbie to HDFS. I just decided to use HDFS in my project, and I plan to
contribute patches to HDFS in the future. However, I am a bit confused about
the different Hadoop branches and code repos, which blocks me from setting up
my local dev environment.

I realize the version numbers are actually different branches, e.g.
branch-0.21, branch-0.22, branch-0.23, and trunk. I am wondering which
branch should I use and develop on? Which one is most stable? Does trunk
include all patches in other branches such as branch-0.21?

For hadoop common repo, which version should I use for a particular hdfs
version?

Also, I realize the git repo is not updated on time when code is committed
to the svn repo. Should I use the git repo for dev or build a local git
repo on top of svn repo instead?

Sorry for the naive questions. Your help is highly appreciated.

Regards,
Peter


[jira] [Resolved] (HDFS-2859) LOCAL_ADDRESS_MATCHER.match has NPE when called from DFSUtil.getSuffixIDs when the host is incorrect

2012-02-01 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2859.
-------------------------------

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

I committed this. I also removed an unneeded extra import of NNStorage that 
was in the patch before committing. In the future, can you please attach "p0" 
patches rather than p1, i.e. use {{git diff --no-prefix}}? Thanks!

> LOCAL_ADDRESS_MATCHER.match has NPE when called from DFSUtil.getSuffixIDs 
> when the host is incorrect
> -----------------------------------------------------------------------------------------------------
>
> Key: HDFS-2859
> URL: https://issues.apache.org/jira/browse/HDFS-2859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Bikas Saha
>Assignee: Bikas Saha
>Priority: Minor
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-2859.HDFS-1623.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2874) HA: edit log should log to shared dirs before local dirs

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
HA: edit log should log to shared dirs before local dirs
--------------------------------------------------------

 Key: HDFS-2874
 URL: https://issues.apache.org/jira/browse/HDFS-2874
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Todd Lipcon
Priority: Critical


Currently, the NN logs its edits to each of its edits directories in sequence. 
This can produce the following bad sequence:
- The NN accumulates 100 edits (tx 1-100) in its buffer, writes and syncs them 
to a local drive, then crashes before writing to the shared dir.
- Failover occurs. The SBN takes over at txid=1, since txid 1 was never 
written to the shared dir.
- The first NN restarts. It reads up to txid 100 from its local directories. 
It is now "ahead" of the active NN, with inconsistent state.
The solution is to write to the shared edits dir, and sync it, before writing 
to any local drives.
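
A sketch of the proposed ordering (the Journal interface below is
illustrative, not the real JournalManager API):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: flush and sync every shared journal before any local one, so a
// standby that takes over always sees at least what the local disks saw.
class OrderedEditLogSketch {
  interface Journal {
    boolean isShared();
    void writeAndSync(byte[] edits) throws IOException;
  }

  private final List<Journal> journals = new ArrayList<Journal>();

  void logSync(byte[] edits) throws IOException {
    for (Journal j : journals) {
      if (j.isShared()) {
        j.writeAndSync(edits); // shared dirs first
      }
    }
    for (Journal j : journals) {
      if (!j.isShared()) {
        j.writeAndSync(edits); // local dirs only after the shared sync
      }
    }
  }
}
{code}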

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2875) Clean up LeaseManager#changeLease

2012-02-01 Thread Aaron T. Myers (Created) (JIRA)
Clean up LeaseManager#changeLease
---------------------------------

 Key: HDFS-2875
 URL: https://issues.apache.org/jira/browse/HDFS-2875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Priority: Minor


The code for {{LeaseManager#changeLease}} and the associated 
{{FSNamesystem#unprotectedChangeLease}} is very fragile, and can be improved in 
several ways.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2876) The unit tests (src/test/unit) are not being compiled and are not runnable

2012-02-01 Thread Eli Collins (Created) (JIRA)
The unit tests (src/test/unit) are not being compiled and are not runnable
--------------------------------------------------------------------------

 Key: HDFS-2876
 URL: https://issues.apache.org/jira/browse/HDFS-2876
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
Reporter: Eli Collins


The unit tests (src/test/unit, not src/test/java) are not being compiled and 
are not runnable. {{mvn -Dtest=TestBlockRecovery test}} executed from 
hadoop-hdfs-project does not compile or execute the test: TestBlockRecovery 
does not compile, yet the test target completes without error.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2877) If locking of a storage dir fails, it will remove the other NN's lock file on exit

2012-02-01 Thread Todd Lipcon (Created) (JIRA)
If locking of a storage dir fails, it will remove the other NN's lock file on 
exit
-----------------------------------------------------------------------------------

 Key: HDFS-2877
 URL: https://issues.apache.org/jira/browse/HDFS-2877
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0, 0.23.0, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon


In {{Storage.tryLock()}}, we call {{lockF.deleteOnExit()}} regardless of 
whether we successfully lock the directory. So, if another NN has the 
directory locked, our first attempt to start a second NN will fail to lock it 
-- but the failed attempt will still remove the other NN's lock file, and a 
second attempt will erroneously start.
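
A sketch of the fix, simplified from the shape of {{Storage.tryLock()}} (the
surrounding class is illustrative):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

class TryLockSketch {
  FileLock tryLock(File lockF) throws IOException {
    RandomAccessFile file = new RandomAccessFile(lockF, "rws");
    FileLock lock = file.getChannel().tryLock();
    if (lock == null) {
      // Another process holds the lock: do NOT call deleteOnExit() here,
      // or our exit would remove the other NN's lock file.
      file.close();
      return null;
    }
    // Safe to schedule deletion now that we actually own the lock.
    lockF.deleteOnExit();
    return lock;
  }
}
{code}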

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2876) The unit tests (src/test/unit) are not being compiled and are not runnable

2012-02-01 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2876.
-------------------------------

Resolution: Duplicate

> The unit tests (src/test/unit) are not being compiled and are not runnable
> --------------------------------------------------------------------------
>
> Key: HDFS-2876
> URL: https://issues.apache.org/jira/browse/HDFS-2876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> The unit tests (src/test/unit, not src/test/java) are not being compiled 
> and are not runnable. {{mvn -Dtest=TestBlockRecovery test}} executed from 
> hadoop-hdfs-project does not compile or execute the test: TestBlockRecovery 
> does not compile, yet the test target completes without error.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: a newbie question

2012-02-01 Thread Harsh J
Peter,

(Inline)

On Thu, Feb 2, 2012 at 3:06 AM, Peter S  wrote:
> I realize the version numbers are actually different branches, e.g.
> branch-0.21, branch-0.22, branch-0.23, and trunk. I am wondering which
> branch should I use and develop on? Which one is most stable?

All new work is to go to trunk (presently numbered as 0.24).

The page at http://wiki.apache.org/hadoop/HowToContribute goes over
setting up the trunk development workspace and should help you get
started.

What do you mean by "most stable"? Do you ask in terms of build stability?

> Does trunk include all patches in other branches such as branch-0.21?

Yes, the branches are progressive.

>
> For hadoop common repo, which version should I use for a particular hdfs
> version?

Don't quite get this question.

> Also, I realize the git repo is not updated on time when code is committed
> to the svn repo. Should I use the git repo for dev or build a local git
> repo on top of svn repo instead?

Using the slightly-lagged git mirror or the subversion repository is
your choice -- I prefer the former while working. The only thing to
remember about using git is to pass the --no-prefix option to the
git-diff command when generating patches for contribution via JIRA.

-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about


Re: a newbie question

2012-02-01 Thread Peter S
Hi Harsh,

Thanks a lot for the reply.  Some questions are inline.

On Wed, Feb 1, 2012 at 8:31 PM, Harsh J  wrote:

> Peter,
>
> (Inline)
>
> On Thu, Feb 2, 2012 at 3:06 AM, Peter S  wrote:
> > I realize the version numbers are actually different branches, e.g.
> > branch-0.21, branch-0.22, branch-0.23, and trunk. I am wondering which
> > branch should I use and develop on? Which one is most stable?
>
> All new work is to go to trunk (presently numbered as 0.24).

Then what about 1.0.*? It seems to be the beta version. But why is trunk
still numbered 0.24?


>


> The page at http://wiki.apache.org/hadoop/HowToContribute goes over
> setting up the trunk development workspace and should help you get
> started.
>
> What do you mean by "most stable"? Do you ask in terms of build stability?
>
I mean in production.  From the release page:
"
0.20.X - legacy stable version
0.20.203.X - current stable version
1.0.X - current beta version, 1.0 release
0.22.X - does not include security
0.23.X - current alpha version
"
I don't quite get this. Why is 0.23.X an alpha version while 1.0.X is a
beta version? And 0.22.X doesn't include security. How does this release
numbering work?

For the Cloudera version, will all the patches in CDH also be in 1.0.X?



> > Does trunk include all patches in other branches such as branch-0.21?
>
> Yes, the branches are progressive.
>
> >
> > For hadoop common repo, which version should I use for a particular hdfs
> > version?
>
> Don't quite get this question.
>
Because the common repo and the hdfs repo are separate, is it possible to
have some incompatibility between the two?


>
> > Also, I realize the git repo is not updated on time when code is
> committed
> > to the svn repo. Should I use the git repo for dev or build a local git
> > repo on top of svn repo instead?
>
> Using the slightly-lagged git mirror or the subversion repository is
> your choice -- I prefer the former while working. The only thing to
> remember about using git is to pass the --no-prefix option to the
> git-diff command when generating patches for contribution via JIRA.
>
Got it. Thanks a lot.

>
> --
> Harsh J
> Customer Ops. Engineer
> Cloudera | http://tiny.cloudera.com/about
>


Re: a newbie question

2012-02-01 Thread Harsh J
Hi,

On Thu, Feb 2, 2012 at 10:15 AM, Peter S  wrote:
> Hi Harsh,
>
> Thanks a lot for the reply.  Some questions are inline.
>
> On Wed, Feb 1, 2012 at 8:31 PM, Harsh J  wrote:
>
>> Peter,
>>
>> (Inline)
>>
>> On Thu, Feb 2, 2012 at 3:06 AM, Peter S  wrote:
>> > I realize the version numbers are actually different branches, e.g.
>> > branch-0.21, branch-0.22, branch-0.23, and trunk. I am wondering which
>> > branch should I use and develop on? Which one is most stable?
>>
>> All new work is to go to trunk (presently numbered as 0.24).
>
> Then what about 1.0.*? It seems to be the beta version. But why is trunk
> still numbered 0.24?

1.0 is a rename of the 0.20 line; it's not newer than 0.21+. The other
branches will be renamed eventually, but that rename has not happened yet.
I agree it's confusing, but the illustrations at
http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/
and https://blogs.apache.org/bigtop/entry/all_you_wanted_to_know should
help you understand completely.

>> The page at http://wiki.apache.org/hadoop/HowToContribute goes over
>> setting up the trunk development workspace and should help you get
>> started.
>>
>> What do you mean by "most stable"? Do you ask in terms of build stability?
>>
> I mean in production.  From the release page:
> "
> 0.20.X - legacy stable version
> 0.20.203.X - current stable version
> 1.0.X - current beta version, 1.0 release
> 0.22.X - does not include security
> 0.23.X - current alpha version
> "
> I don't quite get this. Why is 0.23.X an alpha version while 1.0.X is a
> beta version? And 0.22.X doesn't include security. How does this release
> numbering work?
>
> For the Cloudera version, will all the patches in CDH also be in 1.0.X?

I think my previous comment should cover these questions. It's a chaotic
series of numbers because the rename has only partially happened, but the
numbering will stabilize soon.

Also, it's best to ask CDH-specific questions on its active user forum at
https://groups.google.com/a/cloudera.org/group/cdh-user/topics

>> > For hadoop common repo, which version should I use for a particular hdfs
>> > version?
>>
>> Don't quite get this question.
>>
> Because the common repo and the hdfs repo are separate, is it possible to
> have some incompatibility between the two?

The projects were all merged back together recently; you only need to use
the hadoop-common repository:

i.e. use http://svn.apache.org/repos/asf/hadoop/common/trunk/ (or)
https://github.com/apache/hadoop-common /
http://git.apache.org/hadoop-common.git

-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about


Re: a newbie question

2012-02-01 Thread Peter S
Clear enough for me to understand.

Thanks a lot, Harsh!

Peter
