[jira] [Created] (HDFS-4223) Browsing filesystem from specific datanode in live nodes page also should include delegation token in the url

2012-11-23 Thread Vinay (JIRA)
Vinay created HDFS-4223:
---

 Summary: Browsing filesystem from specific datanode in live nodes 
page also should include delegation token in the url
 Key: HDFS-4223
 URL: https://issues.apache.org/jira/browse/HDFS-4223
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay


Browsing the file system via the 'Browse the filesystem' link includes
'tokenString' as a parameter in the URL.

In the same way, browsing via a specific datanode from the live nodes page
should also include 'tokenString' as a parameter, to avoid the following
exception:

{noformat}javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]{noformat}
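
For illustration, here is a rough sketch of the kind of change implied (the helper name and JSP page below are assumptions for the sketch, not necessarily the actual HDFS code):

{code:java}
// Sketch only: buildBrowseUrl is a hypothetical helper, and "tokenString" is
// used as the parameter name because that is what the description above uses.
static String buildBrowseUrl(String datanodeHostPort, String dir, String tokenString) {
  StringBuilder url = new StringBuilder("http://").append(datanodeHostPort)
      .append("/browseDirectory.jsp?dir=").append(dir);
  if (tokenString != null && !tokenString.isEmpty()) {
    // Without the token the datanode falls back to Kerberos, and the browser
    // (which has no TGT to offer) hits the SaslException above.
    url.append("&tokenString=").append(tokenString);
  }
  return url.toString();
}
{code}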



Jenkins build became unstable: Hadoop-Hdfs-0.23-Build #444

2012-11-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/444/



Hadoop-Hdfs-0.23-Build - Build # 444 - Unstable

2012-11-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/444/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12266 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] Wrote classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.6-SNAPSHOT/hadoop-hdfs-project-0.23.6-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] Skipped writing classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:26.328s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [49.193s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.058s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:16.175s
[INFO] Finished at: Fri Nov 23 11:43:17 UTC 2012
[INFO] Final Memory: 53M/746M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDFSClientRetries.testDFSClientRetriesOnBusyBlocks

Error Message:
Something wrong! Test 4 got Exception with maxmum retries!

Stack Trace:
junit.framework.AssertionFailedError: Something wrong! Test 4 got Exception 
with maxmum retries!
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.assertTrue(Assert.java:20)
at 
org.apache.hadoop.hdfs.TestDFSClientRetries.__CLR3_0_2mjmv27x9y(TestDFSClientRetries.java:420)
at 
org.apache.hadoop.hdfs.TestDFSClientRetries.testDFSClientRetriesOnBusyBlocks(TestDFSClientRetries.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)

Is it possible to read a corrupted Sequence File?

2012-11-23 Thread Hs
Hi,

I am running hadoop 1.0.3 and hbase-0.94.0 on a 12-node cluster. Due to
unknown operational faults, 6 datanodes have suffered complete data loss
(the HDFS data directory is gone). When I restart hadoop, it reports "The
ratio of reported blocks 0.8252".

I have a folder in hdfs containing many important files in hadoop
SequenceFile format. For this folder, the hadoop fsck tool shows:

 Total size:    134867556461 B
 Total dirs:    16
 Total files:   251
 Total blocks (validated):      2136 (avg. block size 63140241 B)
  ********************************
  CORRUPT FILES:        167
  MISSING BLOCKS:       405
  MISSING SIZE:         25819446263 B
  CORRUPT BLOCKS:       405
  ********************************

I wonder if I can read these corrupted SequenceFiles with the missing blocks
skipped? Or, what else can I do now to recover as much of these SequenceFiles
as possible?

Please save me.

Thanks!
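
(In case a sketch helps: SequenceFiles contain periodic sync markers, so one
salvage approach is to read records until an error, then seek forward and
re-align on the next sync marker. The following is a rough sketch against the
Hadoop 1.x API; the 64 MB skip is an arbitrary choice based on the average
block size in the fsck output above, and the error handling is deliberately
crude.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SalvageSeqFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path in = new Path(args[0]);
    long fileLen = fs.getFileStatus(in).getLen();

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, in, conf);
    Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
    Writable val = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);

    long good = 0, badSpans = 0;
    while (reader.getPosition() < fileLen) {
      try {
        if (!reader.next(key, val)) break;
        good++;                      // record parsed cleanly; do something with it
      } catch (Exception e) {        // ChecksumException, missing-block errors, ...
        long pos = reader.getPosition();
        badSpans++;
        try {
          // Jump past the damaged span and re-align on the next sync marker.
          reader.sync(pos + 64L * 1024 * 1024);
        } catch (Exception e2) {
          break;                     // sync itself hit a missing block; give up
        }
        if (reader.getPosition() <= pos) break;  // no forward progress
      }
    }
    reader.close();
    System.out.println(good + " records recovered, " + badSpans + " bad spans skipped");
  }
}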


Re: Is it possible to read a corrupted Sequence File?

2012-11-23 Thread Radim Kolar



> I wonder if I can read these corrupted SequenceFiles with missing blocks
> skipped?

It's possible to recover the existing blocks and repair the seq file structure.
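
If it helps, the "repair" half could be as simple as pairing the salvage loop
sketched earlier in the thread with a writer, so every record that still
parses gets appended to a fresh, well-formed file (Hadoop 1.x API; this is a
sketch, not a tested tool):

SequenceFile.Writer writer = SequenceFile.createWriter(
    fs, conf, new Path(args[1]), reader.getKeyClass(), reader.getValueClass());
// inside the read loop, for each record that parses cleanly:
writer.append(key, val);
// after the loop:
writer.close();

The rewritten copy then has intact sync markers and no references to the
missing blocks.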


Re: Is it possible to read a corrupted Sequence File?

2012-11-23 Thread Hs
Could you please provide a little more detail? Should I run "hadoop fsck
/ -move" first to move the broken files into /lost+found and then repair
them? Or can I repair them directly in their current path?
Thanks!


2012/11/24 Radim Kolar

>
>  I wonder if I can read these corrupted SequenceFiles with missing blocks
>> skipped?
>>
> It's possible to recover the existing blocks and repair the seq file structure.
>