[jira] [Resolved] (HDFS-2675) Reduce verbosity when double-closing edit logs

2011-12-14 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2675.
-------------------------------

       Resolution: Fixed
    Fix Version/s: 0.23.1
                   0.24.0
 Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
     Hadoop Flags: Reviewed

Committed to 23 and trunk, thx

> Reduce verbosity when double-closing edit logs
> ----------------------------------------------
>
>                 Key: HDFS-2675
>                 URL: https://issues.apache.org/jira/browse/HDFS-2675
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Trivial
>             Fix For: 0.24.0, 0.23.1
>
>         Attachments: hdfs-2675.txt
>
>
> Currently the edit logs log at WARN level when they're double-closed. But 
> this happens in the normal flow of things, so we may as well reduce it to 
> DEBUG to reduce log spam in unit tests, etc.
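
For reference, the change boils down to an idempotent close that logs quietly on the second call. A minimal sketch, assuming a simple open flag (illustrative, not the actual FSEditLog code):

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch of an idempotent close; names here are illustrative.
class EditLogCloseSketch {
  private static final Log LOG = LogFactory.getLog(EditLogCloseSketch.class);
  private boolean open = true;

  synchronized void close() {
    if (!open) {
      // Double-close happens in the normal flow of things, so keep it quiet.
      LOG.debug("Edit log already closed; ignoring duplicate close()");
      return;
    }
    open = false;
    // ... flush buffered edits and release the output streams ...
  }
}
{code}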

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2680) DFSClient should construct failover proxy with exponential backoff

2011-12-14 Thread Todd Lipcon (Created) (JIRA)
DFSClient should construct failover proxy with exponential backoff
------------------------------------------------------------------

                 Key: HDFS-2680
                 URL: https://issues.apache.org/jira/browse/HDFS-2680
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ha, hdfs client
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon
            Priority: Minor


HADOOP-7896 adds facilities in common for exponential backoff when failing back 
and forth between NNs. We need to use the new capability from DFSClient when we 
construct the proxy.
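
For a sense of the shape of that backoff, here is a self-contained sketch of capped exponential backoff between failover attempts; the constants and the generic retry wrapper are assumptions for illustration, not the HADOOP-7896 API:

{code:java}
import java.util.concurrent.Callable;

// Sketch only: capped exponential backoff between failover attempts.
public class FailoverBackoffSketch {
  static <T> T callWithBackoff(Callable<T> call, int maxFailovers)
      throws Exception {
    long delayMs = 500;              // assumed initial backoff
    final long maxDelayMs = 15000;   // assumed cap so failbacks don't stall
    Exception last = null;
    for (int attempt = 0; attempt <= maxFailovers; attempt++) {
      try {
        return call.call();          // e.g. an RPC against the current NN
      } catch (Exception e) {        // stand-in for a failover-triggering error
        last = e;
        Thread.sleep(delayMs);       // back off before trying the other NN
        delayMs = Math.min(delayMs * 2, maxDelayMs);  // double, up to the cap
      }
    }
    throw last;
  }
}
{code}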

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2664) Remove TestDFSOverAvroRpc

2011-12-14 Thread Suresh Srinivas (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-2664.
-----------------------------------

    Resolution: Invalid

HDFS-2676 removed Avro RPC. This jira is no longer valid.

> Remove TestDFSOverAvroRpc
> -------------------------
>
>                 Key: HDFS-2664
>                 URL: https://issues.apache.org/jira/browse/HDFS-2664
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.24.0
>
>
> With HDFS-2647, HDFS has transitioned to protocol buffers. The server-side 
> implementation registers the PB.class and a BlockingService as the 
> implementation, and the client side uses the PB.class as the interface. The 
> RPC engine used is protobuf, both for the RPC proxy and for the server. With 
> this, TestDFSOverAvroRpc fails. I propose removing this test.
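
A rough sketch of the registration pattern the description refers to; the protocol interface below is a stand-in, not the real ClientNamenodeProtocolPB wiring:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

// Sketch: both the client proxy and the server are told to use the
// protobuf RPC engine for a given protocol interface.
public class PbRpcWiringSketch {
  interface ExamplePB {}  // stand-in for a generated *PB protocol interface

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    RPC.setProtocolEngine(conf, ExamplePB.class, ProtobufRpcEngine.class);
    // The server side would additionally register a BlockingService built
    // from the protobuf-generated service and a translator implementation.
  }
}
{code}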

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2681) Add ZK client for leader election

2011-12-14 Thread Suresh Srinivas (Created) (JIRA)
Add ZK client for leader election
---------------------------------

                 Key: HDFS-2681
                 URL: https://issues.apache.org/jira/browse/HDFS-2681
             Project: Hadoop HDFS
          Issue Type: Sub-task
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Suresh Srinivas
            Assignee: Suresh Srinivas


ZKClient needs to support the following capabilities (a rough sketch against 
the ZooKeeper API follows the list):
# Ability to create a znode for co-ordinating leader election.
# Ability to monitor and receive callbacks when the active znode's status changes.
# Ability to get information about the active node.
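
The sketch below uses the stock ZooKeeper client API; the znode path and session timeout are assumptions for illustration:

{code:java}
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

// Minimal sketch of the three capabilities above.
public class ZKElectionSketch implements Watcher {
  private static final String LOCK_PATH = "/hdfs-ha/active-lock";  // assumed
  private final ZooKeeper zk;

  public ZKElectionSketch(String connectString) throws Exception {
    zk = new ZooKeeper(connectString, 5000, this);  // assumed session timeout
  }

  // 1. Try to become active by creating an ephemeral lock znode.
  boolean tryBecomeActive(byte[] myInfo) throws Exception {
    try {
      zk.create(LOCK_PATH, myInfo, ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.EPHEMERAL);
      return true;
    } catch (KeeperException.NodeExistsException e) {
      return false;  // someone else is already active
    }
  }

  // 2. Watch the lock znode; process() fires when its status changes.
  void monitorActive() throws Exception {
    zk.exists(LOCK_PATH, true);
  }

  @Override
  public void process(WatchedEvent event) {
    // Callback on active-znode changes (e.g. NodeDeleted means the active
    // died and the election should be re-run).
  }

  // 3. Read which node currently holds the active lock.
  byte[] getActiveInfo() throws Exception {
    return zk.getData(LOCK_PATH, false, new Stat());
  }
}
{code}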

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2671) HA: NN should throw StandbyException in response to RPCs in STANDBY state

2011-12-14 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2671.
-------------------------------

       Resolution: Fixed
    Fix Version/s: HA branch (HDFS-1623)
     Hadoop Flags: Reviewed

thanks, committed to branch

> HA: NN should throw StandbyException in response to RPCs in STANDBY state
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2671
>                 URL: https://issues.apache.org/jira/browse/HDFS-2671
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha, name-node
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: HA branch (HDFS-1623)
>
>         Attachments: hdfs-2671.txt
>
>
> Currently the NN throws UnsupportedActionException when it is hit with RPCs 
> while in standby, but that is exactly the situation the StandbyException 
> class is meant for. The wrong exception type is preventing client failover 
> from working as designed.
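
A minimal sketch of the intended behavior, assuming a simple state-check helper (not the actual NameNode code):

{code:java}
import org.apache.hadoop.ipc.StandbyException;

// Sketch: RPCs requiring the ACTIVE state fail with StandbyException,
// which the client-side failover logic treats as "try the other NN".
class HAStateCheckSketch {
  enum HAState { ACTIVE, STANDBY }
  private volatile HAState state = HAState.STANDBY;

  void checkOperation() throws StandbyException {
    if (state == HAState.STANDBY) {
      throw new StandbyException("Operation not supported in STANDBY state");
    }
  }
}
{code}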

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2682) HA: When a FailoverProxyProvider is used, the client should not retry 45 times (hard-coded value) if it times out connecting to the server.

2011-12-14 Thread Uma Maheswara Rao G (Created) (JIRA)
HA: When a FailoverProxyProvider is used, the client should not retry 45 
times (hard-coded value) if it times out connecting to the server.
---------------------------------------------------------------------------

                 Key: HDFS-2682
                 URL: https://issues.apache.org/jira/browse/HDFS-2682
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Uma Maheswara Rao G
            Assignee: Uma Maheswara Rao G



If a client gets a SocketTimeoutException while trying to connect to the 
server, it will retry 45 times before rethrowing the exception to the 
RetryPolicy.

I think we can make this 45-retry count configurable and set it to a lower 
value when a FailoverProxyProvider is used.
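
A sketch of how a client might then configure it; the key name below is an assumption pending this JIRA, not a committed config:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: drop the connect-retry count (hard-coded to 45 today in
// ipc.Client) when a FailoverProxyProvider will handle retries anyway.
public class FailoverClientConfSketch {
  public static Configuration create() {
    Configuration conf = new Configuration();
    conf.setInt("ipc.client.connect.max.retries.on.timeouts", 3);  // hypothetical key
    return conf;
  }
}
{code}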

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Jenkins build is still unstable: Hadoop-Hdfs-0.23-Build #107

2011-12-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/107/




Hadoop-Hdfs-0.23-Build - Build # 107 - Still Unstable

2011-12-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/107/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 13559 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (dist) @ hadoop-hdfs-httpfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
  [get] Getting: 
http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.32/bin/apache-tomcat-6.0.32.tar.gz
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/tomcat.tar.gz
.
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/tomcat.exp
 [exec] 
 [exec] gzip: stdin: unexpected end of file
 [exec] tar: Unexpected EOF in archive
 [exec] tar: Unexpected EOF in archive
 [exec] tar: Error is not recoverable: exiting now
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [6:41.690s]
[INFO] Apache Hadoop HttpFS .............................. FAILURE [11.966s]
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6:54.095s
[INFO] Finished at: Wed Dec 14 11:41:39 UTC 2011
[INFO] Final Memory: 73M/762M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (dist) on project 
hadoop-hdfs-httpfs: An Ant BuildException has occured: exec returned: 2 -> 
[Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-httpfs
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HDFS-2545
Updating MAPREDUCE-3426
Updating HDFS-2675
Updating MAPREDUCE-3544
Updating HDFS-2649
Updating MAPREDUCE-2863
Updating MAPREDUCE-2950
Updating HADOOP-7810
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###################################################################################
############################## FAILED TESTS (if any) ###############################
###################################################################################
2 tests failed.
FAILED:  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation

Error Message:
expected:<401> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<401> but was:<200>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at 
org.apache.hadoop.fs.http.server.TestHttpFSServer.__CLR3_0_2bnqkmt1wc(TestHttpFSServer.java:97)
at 
org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation(TestHttpFSServer.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable

Jenkins build is unstable: Hadoop-Hdfs-trunk #894

2011-12-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/894/




Hadoop-Hdfs-trunk - Build # 894 - Still unstable

2011-12-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/894/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 12545 lines...]
[INFO] Compiling 2 source files to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.10:test (default-test) @ 
hadoop-hdfs-bkjournal ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ 
hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [2:07:40.168s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [35.385s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. FAILURE [1.440s]
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:08:17.452s
[INFO] Finished at: Wed Dec 14 13:43:03 UTC 2011
[INFO] Final Memory: 87M/731M
[INFO] ------------------------------------------------------------------------
[ERROR] Could not find resource 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/dev-support/findbugsExcludeFile.xml'.
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ResourceNotFoundException
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HDFS-2545
Updating MAPREDUCE-3426
Updating HADOOP-7892
Updating MAPREDUCE-2863
Updating MAPREDUCE-3542
Updating HDFS-2676
Updating HDFS-2663
Updating HADOOP-7920
Updating HDFS-2675
Updating HDFS-2661
Updating MAPREDUCE-3545
Updating MAPREDUCE-3544
Updating HDFS-234
Updating HDFS-2649
Updating HADOOP-7810
Updating HDFS-2650
Updating HDFS-2669
Updating HDFS-2666
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###################################################################################
############################## FAILED TESTS (if any) ###############################
###################################################################################
8 tests failed.
FAILED:  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation

Error Message:
expected:<401> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<401> but was:<200>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at 
org.apache.hadoop.fs.http.server.TestHttpFSServer.__CLR3_0_2bnqkmt1wc(TestHttpFSServer.java:97)
at 
org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation(TestHttpFSServer.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:73)
at 
org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:108)
at 
org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirH

[jira] [Resolved] (HDFS-2680) DFSClient should construct failover proxy with exponential backoff

2011-12-14 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2680.
-------------------------------

       Resolution: Fixed
    Fix Version/s: HA branch (HDFS-1623)
     Hadoop Flags: Reviewed

Thanks, committed to HA branch.

> DFSClient should construct failover proxy with exponential backoff
> ------------------------------------------------------------------
>
>                 Key: HDFS-2680
>                 URL: https://issues.apache.org/jira/browse/HDFS-2680
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha, hdfs client
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Minor
>             Fix For: HA branch (HDFS-1623)
>
>         Attachments: hdfs-2680.txt
>
>
> HADOOP-7896 adds facilities in common for exponential backoff when failing 
> back and forth between NNs. We need to use the new capability from DFSClient 
> when we construct the proxy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2683) Authority-based lookup of proxy provider fails if path becomes canonicalized

2011-12-14 Thread Todd Lipcon (Created) (JIRA)
Authority-based lookup of proxy provider fails if path becomes canonicalized
-----------------------------------------------------------------------------

                 Key: HDFS-2683
                 URL: https://issues.apache.org/jira/browse/HDFS-2683
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ha, hdfs client
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon
            Priority: Critical


When testing MapReduce on top of an HA cluster we ran into the following bug: 
some uses of HDFS paths go through a canonicalization step which ensures that 
the authority component in the URI includes a port number. So our 
hdfs://logical-nn-uri/foo path turned into hdfs://logical-nn-uri:8020/foo. The 
code which looks up the failover proxy provider then failed to find the 
associated config. We should only compare the hostname portion of the URI when 
looking up proxy providers.
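
A minimal sketch of the hostname-only lookup this suggests; the config key pattern is illustrative, not the exact one used by the client:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

// Sketch: key the proxy-provider lookup on the host alone, so that
// hdfs://logical-nn-uri/foo and hdfs://logical-nn-uri:8020/foo both
// resolve to the same provider class.
public class ProxyProviderLookupSketch {
  static Class<?> lookupProvider(Configuration conf, URI nnUri)
      throws ClassNotFoundException {
    String host = nnUri.getHost();  // ignores any port in the authority
    String className = conf.get("dfs.client.failover.proxy.provider." + host);
    return className == null ? null : Class.forName(className);
  }
}
{code}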

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2683) Authority-based lookup of proxy provider fails if path becomes canonicalized

2011-12-14 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2683.
-------------------------------

       Resolution: Fixed
    Fix Version/s: HA branch (HDFS-1623)
     Hadoop Flags: Reviewed

Thanks, committed to branch

> Authority-based lookup of proxy provider fails if path becomes canonicalized
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-2683
>                 URL: https://issues.apache.org/jira/browse/HDFS-2683
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha, hdfs client
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: HA branch (HDFS-1623)
>
>         Attachments: hdfs-2683.txt, hdfs-2683.txt
>
>
> When testing MapReduce on top of an HA cluster we ran into the following bug: 
> some uses of HDFS paths go through a canonicalization step which ensures that 
> the authority component in the URI includes a port number. So our 
> hdfs://logical-nn-uri/foo path turned into hdfs://logical-nn-uri:8020/foo. 
> The code which looks up the failover proxy provider then failed to find the 
> associated config. We should only compare the hostname portion of the URI 
> when looking up proxy providers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2684) Fix up some failing unit tests on HA branch

2011-12-14 Thread Todd Lipcon (Created) (JIRA)
Fix up some failing unit tests on HA branch
-------------------------------------------

                 Key: HDFS-2684
                 URL: https://issues.apache.org/jira/browse/HDFS-2684
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ha, test
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon
            Priority: Critical


To keep moving quickly on the HA branch, we've committed some stuff even though 
some unit tests are failing. This JIRA is to take a pass through the failing 
unit tests and get back to green (or close to it). If anything turns out to be 
a major amount of work I'll file separate JIRAs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




test failures on trunk

2011-12-14 Thread Eli Collins
Hey gang,

Looks like a number of the trunk tests
(https://builds.apache.org/job/Hadoop-Hdfs-trunk) started failing on
the 10th, due to the following. Ring a bell?  Maybe due to all the
recent protocol changes?

Error Message

org.apache.hadoop.hdfs.protocol.HdfsFileStatus cannot be cast to
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus
Stacktrace

java.lang.ClassCastException:
org.apache.hadoop.hdfs.protocol.HdfsFileStatus cannot be cast to
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus
at 
org.apache.hadoop.hdfs.DistributedFileSystem$1.hasNext(DistributedFileSystem.java:452)
at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:1551)
at org.apache.hadoop.fs.FileSystem$5.next(FileSystem.java:1581)
at org.apache.hadoop.fs.FileSystem$5.next(FileSystem.java:1541)
at 
org.apache.hadoop.fs.TestListFiles.testDirectory(TestListFiles.java:146)
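
If the recent protocol changes are the culprit, the likely failure mode is a
lost subtype across the RPC boundary followed by a downcast: the translation
layer rebuilds the superclass, and the later cast blows up. A self-contained
toy version, with stand-in classes rather than the real HdfsFileStatus types:

    // Toy illustration: superclass instance downcast to a subclass it never was.
    public class CastDemo {
      static class FileStatusLike {}                                // HdfsFileStatus stand-in
      static class LocatedFileStatusLike extends FileStatusLike {}  // located subtype

      public static void main(String[] args) {
        FileStatusLike fromRpc = new FileStatusLike();  // translator lost the subtype
        // Throws java.lang.ClassCastException, as in the trunk failures above:
        LocatedFileStatusLike located = (LocatedFileStatusLike) fromRpc;
      }
    }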

Thanks,
Eli


[jira] [Created] (HDFS-2685) hadoop fs -ls globbing gives inconsistent exit code

2011-12-14 Thread Mitesh Singh Jat (Created) (JIRA)
hadoop fs -ls globbing gives inconsistent exit code
---------------------------------------------------

                 Key: HDFS-2685
                 URL: https://issues.apache.org/jira/browse/HDFS-2685
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.20.205.0, 0.20.204.0, 0.20.2
            Reporter: Mitesh Singh Jat


The _hadoop fs -ls_ command's exit code for a globbed input path is the exit 
code of the last resolved absolute path, whereas the _ls_ command always gives 
the same exit code regardless of the position of the non-existent path in the 
glob.

{code:bash}$ hadoop fs -mkdir input/20110{1,2,3}/{A,B,C,D}/{1,2} {code}

Since directory 'input/201104/' is not present, the following command gives 255 
as exit code.
{code:bash}$ hadoop fs -ls input/20110{1,2,3,4}/ ; echo $? {code}
{noformat}
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/D
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/D
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/D
ls: Cannot access input/201104/: No such file or directory.
{noformat}
{color:red}255{color}


The directory 'input/201104/' is not present but is given as the second-to-last 
parameter in the glob.
The following command gives 0 as the exit code, because the last directory, 
'input/201103/', is present.
{code:bash}$ hadoop fs -ls input/20110{1,2,4,3}/ ; echo $? {code}
{noformat}
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201101/D
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201102/D
ls: Cannot access input/201104/: No such file or directory.
Found 4 items
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/A
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/B
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/C
drwxr-xr-x   - mitesh supergroup  0 2011-12-15 11:51 
/user/mitesh/input/201103/D
{noformat}
{color:green}0{color}


On Linux, by contrast, the ls command gives a non-zero exit code (2) 
irrespective of the position of the non-existent path in the glob.
{code:bash}$ mkdir -p input/20110{1,2,3,4}/{A,B,C,D}/{1,2} {code}


{code:bash}$ ls input/20110{1,2,4,3}/ ; echo $? {code}
{noformat}
/bin/ls: input/201104/: No such file or directory
input/201101/:
./  ../  A/  B/  C/  D/

input/201102/:
./  ../  A/  B/  C/  D/

input/201103/:
./  ../  A/  B/  C/  D/
{noformat}
{color:red}2{color}


{code:bash}$ ls input/20110{1,2,3,4}/ ; echo $? {code}
{noformat}
/bin/ls: input/201104/: No such file or directory
input/201101/:
./  ../  A/  B/  C/  D/

input/201102/:
./  ../  A/  B/  C/  D/

input/201103/:
./  ../  A/  B/  C/  D/
{noformat}
{color:red}2{color}
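
A sketch of the fix direction, matching the GNU ls behavior above: accumulate a 
sticky non-zero exit code across all expanded paths instead of keeping only the 
last one. Names are illustrative, not the actual FsShell code:

{code:java}
import java.io.FileNotFoundException;

// Sketch: remember any per-path failure; the final exit code no longer
// depends on where the bad path sits in the glob expansion.
public class LsExitCodeSketch {
  static int runLs(String[] expandedPaths) {
    int exitCode = 0;
    for (String p : expandedPaths) {
      try {
        listOne(p);                        // hypothetical per-path listing
      } catch (FileNotFoundException e) {
        System.err.println("ls: Cannot access " + p
            + ": No such file or directory.");
        exitCode = 255;                    // sticky failure, never overwritten
      }
    }
    return exitCode;
  }

  static void listOne(String path) throws FileNotFoundException {
    // ... resolve and print the listing for one path ...
  }
}
{code}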

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira