HDFS tests failing on trunk for anyone but me?

2012-02-10 Thread Steve Loughran
I'm seeing HDFS tests failing on a clean trunk checked out today. Is 
anyone else seeing this?



Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.14 sec <<< FAILURE!


Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.455 sec <<< FAILURE!


Running org.apache.hadoop.hdfs.server.datanode.TestReplicasMap
Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.068 sec <<< FAILURE!


I know TestReplicasMap has been troubled in the past (HDFS-2346), but a 
quick search doesn't throw up anything open.
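
For anyone else wanting to check, the individual suites can be rerun on their 
own, assuming trunk has already been built and installed locally; something like

  cd hadoop-hdfs-project/hadoop-hdfs
  mvn test -Dtest=TestReplicasMap

substituting whichever test class is failing.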


[jira] [Resolved] (HDFS-502) hudson ignored a test failure while generating the junit test report.

2012-02-10 Thread Steve Loughran (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-502.
-

  Resolution: Cannot Reproduce
Release Note: I'm going to close this as cannot-reproduce: I do see failures 
in this test propagating to reports, and as we've since moved on from Hudson 
and Ant to Jenkins and Maven, this bug will be impossible to replicate.

> hudson ignored a test failure while generating the junit test report.
> -
>
> Key: HDFS-502
> URL: https://issues.apache.org/jira/browse/HDFS-502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Giridharan Kesavan
>
> http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/25/console
> The console test logs show a test failure:
> [exec] [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 3.745 sec
> [exec] [junit] Test org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol FAILED
> The TestInterDatanodeProtocol test failed for some reason, yet Hudson did not 
> pick up the failure when parsing the XML results:
> http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/25/testReport/
> I'm not sure if this is something to do with the Hudson JUnit plugin or with 
> Hudson itself.





[jira] [Reopened] (HDFS-2922) HA: close out operation categories

2012-02-10 Thread Todd Lipcon (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-2922:
---


This seems to have broken a bunch of tests since getDatanodeReport now throws 
an error on the standby, so MiniDFSCluster won't start an HA cluster anymore. 
I'm going to revert for now to get tests passing again on the branch.
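
For anyone not following the HA branch, the gating in question is roughly of 
this shape (an illustrative fragment, not the committed patch; checkOperation() 
and OperationCategory follow the names used on the branch, and treating 
getDatanodeReport as a READ operation is my assumption):

  // Sketch of an RPC entry point gated on an operation category. On the
  // active NN the check passes; on a standby it throws StandbyException,
  // which is why MiniDFSCluster's use of getDatanodeReport() now fails.
  public DatanodeInfo[] getDatanodeReport(DatanodeReportType type)
      throws IOException {
    namesystem.checkOperation(OperationCategory.READ);  // assumed category
    return namesystem.datanodeReport(type);
  }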

> HA: close out operation categories
> --
>
> Key: HDFS-2922
> URL: https://issues.apache.org/jira/browse/HDFS-2922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-2922.txt, hdfs-2922.txt
>
>
> We need to close out the NN operation categories.
> The following operations should be left as is, i.e. not fail over, as it's 
> reasonable to call these on a standby; we just need to update the TODO 
> with a comment:
> - {{setSafeMode}} (Might want to force the standby out of safemode)
> - {{restoreFailedStorage}} (Might want to tell the standby to restore the 
> shared edits dir)
> - {{saveNamespace}}, {{metaSave}} (Could imagine calling these on a standby 
> eg in a recovery scenario)
> - {{refreshNodes}} (Decommissioning needs to refresh the standby)
> The following operations should be checked for READ; neither should need 
> to be called on a standby, and they will fail over unless stale reads are enabled:
> - {{getTransactionID}}, {{getEditLogManifest}} (we don't checkpoint the 
> standby)
> The following operations should be checked for WRITE, as they should not be 
> called on a standby, i.e. should always fail over:
> - {{finalizeUpgrade}}, {{distributedUpgradeProgress}} (should not be able to 
> upgrade the standby)
> - {{setBalancerBandwidth}} (the balancer should fail over)





[jira] [Created] (HDFS-2936) Provide a better way to specify a HDFS-wide minimum replication requirement

2012-02-10 Thread Harsh J (Created) (JIRA)
Provide a better way to specify a HDFS-wide minimum replication requirement
---

 Key: HDFS-2936
 URL: https://issues.apache.org/jira/browse/HDFS-2936
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J


Currently, if an admin would like to enforce a minimum replication factor for 
all files on his HDFS cluster, he does not have a way to do so. He may arguably 
set dfs.replication.min, but that is a very hard guarantee: if the pipeline 
can't provide that number of replicas for some reason or failure, close() does 
not succeed on the file being written, which leads to several issues.

After discussing with Todd, we feel it would make sense to introduce a second 
config (defaulting to ${dfs.replication.min}) that would act as a minimum 
specified replication for files, perhaps something like 
${dfs.replication.min.user}. This is different from dfs.replication.min, which 
also requires that many replicas be recorded before completeFile() returns. 
Alternatively, we can leave dfs.replication.min alone for hard guarantees and 
add ${dfs.replication.min.for.block.completion}, which could be left at 1 even 
if dfs.replication.min is >1, letting files complete normally while still not 
being created with a low replication factor (so they can be monitored and 
accounted for later).

I prefer the second option myself. Will post a patch with tests soon.
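
To make the distinction in the second option concrete, here is a rough sketch 
of how the two settings would be read (property names are the ones proposed 
above and are not final; the surrounding code is purely illustrative):

  // Illustrative only: the admin-facing floor vs. the block-completion requirement.
  void loadReplicationLimits(org.apache.hadoop.conf.Configuration conf) {
    // site-wide floor that create()/setReplication() requests must meet
    // (the hard guarantee the admin wants enforced)
    int adminMin = conf.getInt("dfs.replication.min", 1);

    // replicas the pipeline must record before completeFile() succeeds;
    // can stay at 1 so writers don't fail on close() when the cluster is
    // temporarily short of datanodes
    int completionMin = conf.getInt("dfs.replication.min.for.block.completion", 1);

    // ... enforce adminMin on requested replication factors, and use
    // completionMin only when deciding whether a block is complete.
  }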





[jira] [Resolved] (HDFS-2862) Datanode classes should not directly use FSDataset

2012-02-10 Thread Tsz Wo (Nicholas), SZE (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-2862.
--

Resolution: Fixed

The sub-tasks fixed the problem. The FSDataset implementation is no longer 
directly referenced in datanode classes. Closing this.
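
A minimal illustration of the guideline (a sketch only, not any particular 
sub-task's patch; FSDatasetInterface and FSDataset are the classes named in 
this issue):

  // Sketch: declare the dependency as FSDatasetInterface rather than the
  // concrete FSDataset, and never downcast to the implementation class, so
  // alternative implementations (e.g. the simulated dataset used in tests)
  // can be plugged in.
  class BlockHandlingSketch {
    private final FSDatasetInterface dataset;   // not: private final FSDataset dataset;

    BlockHandlingSketch(FSDatasetInterface dataset) {
      this.dataset = dataset;
    }
  }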

> Datanode classes should not directly use FSDataset
> --
>
> Key: HDFS-2862
> URL: https://issues.apache.org/jira/browse/HDFS-2862
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>
> Datanode classes should only use the methods defined in FSDatasetInterface 
> and not use FSDataset directly.





[jira] [Resolved] (HDFS-2558) Federation docs do not describe how to enable mount side tables

2012-02-10 Thread Tsz Wo (Nicholas), SZE (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-2558.
--

Resolution: Duplicate

> Federation docs do not describe how to enable mount side tables
> ---
>
> Key: HDFS-2558
> URL: https://issues.apache.org/jira/browse/HDFS-2558
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Araceli Henley
>Assignee: Suresh Srinivas
>Priority: Minor
>
> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html





[jira] [Resolved] (HDFS-2060) DFS client RPCs using protobufs

2012-02-10 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2060.
---

Resolution: Duplicate

This was fixed by other work to move RPC to protobuf.

> DFS client RPCs using protobufs
> ---
>
> Key: HDFS-2060
> URL: https://issues.apache.org/jira/browse/HDFS-2060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-2060-getblocklocations.txt
>
>
> The most important place for wire-compatibility in DFS is between clients and 
> the cluster, since lockstep upgrade is very difficult and a single client may 
> want to talk to multiple server versions. So, I'd like to focus this JIRA on 
> making the RPCs between the DFS client and the NN/DNs wire-compatible using 
> protocol buffer based serialization.





[jira] [Resolved] (HDFS-2478) HDFS Protocols in Protocol Buffers

2012-02-10 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2478.
---

Resolution: Duplicate

This was fixed by the other work to move IPC to protobufs.

> HDFS Protocols in Protocol Buffers
> --
>
> Key: HDFS-2478
> URL: https://issues.apache.org/jira/browse/HDFS-2478
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>






[jira] [Resolved] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-02-10 Thread Jeff Hammerbacher (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Hammerbacher resolved HDFS-2802.
-

Resolution: Duplicate

There's a ton of useful discussion on the HDFS-233 ticket that should be 
preserved when working on this issue. Please continue the discussion there 
rather than duplicating the issue.

> Support for RW/RO snapshots in HDFS
> ---
>
> Key: HDFS-2802
> URL: https://issues.apache.org/jira/browse/HDFS-2802
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Fix For: 0.24.0
>
>
> Snapshots are point-in-time images of parts of the filesystem or the entire 
> filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
> of the filesystem. There are several use cases for snapshots in HDFS. I will 
> post a detailed write-up soon with more information.





[jira] [Reopened] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-02-10 Thread Suresh Srinivas (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reopened HDFS-2802:
---


> Support for RW/RO snapshots in HDFS
> ---
>
> Key: HDFS-2802
> URL: https://issues.apache.org/jira/browse/HDFS-2802
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Fix For: 0.24.0
>
>
> Snapshots are point-in-time images of parts of the filesystem or the entire 
> filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
> of the filesystem. There are several use cases for snapshots in HDFS. I will 
> post a detailed write-up soon with more information.
