Propose a new feature for the hadoop-hdfs-raid project

2012-07-11 Thread Eric Leung
Hi All,

The hadoop-hdfs-raid project is a great way to reduce storage overhead
without sacrificing reliability.
In the current implementation, cold data is encoded under different rules:
1) The coldest data is encoded with a Reed-Solomon erasure code;
2) Data that is less cold is encoded with an XOR code.
Parity is computed over the blocks of a single file, and the files to be
encoded are configured in raid.xml. Maintaining that configuration is tedious
when there are huge numbers of such files, and a file's hotness may change
over time.
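
For intuition, here is a toy sketch of the XOR path (plain Java byte arrays
stand in for HDFS blocks; this shows the general technique, not the actual
RaidNode code): one parity block is the byte-wise XOR of a stripe of data
blocks, so any single lost block can be rebuilt from the parity and the
survivors.

{code}
/** Toy XOR parity: works because a ^ a = 0, so XOR-ing the parity with
 *  the surviving blocks cancels them out and leaves the missing block. */
public class XorParityExample {
  static byte[] xorParity(byte[][] blocks) {
    byte[] parity = new byte[blocks[0].length];
    for (byte[] block : blocks) {
      for (int i = 0; i < parity.length; i++) {
        parity[i] ^= block[i];
      }
    }
    return parity;
  }

  public static void main(String[] args) {
    byte[][] stripe = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    byte[] parity = xorParity(stripe);
    // Rebuild the lost middle block from the parity and the survivors:
    byte[] rebuilt = xorParity(new byte[][] { parity, stripe[0], stripe[2] });
    System.out.println(java.util.Arrays.equals(rebuilt, stripe[1])); // true
  }
}
{code}

XOR tolerates one lost block per stripe, while Reed-Solomon tolerates as many
losses as it has parity blocks, which is why the coldest data gets the
stronger but more expensive code.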
I wonder whether there is a way to determine the hotness of a given block
automatically and to encode blocks with the same hotness level together,
without distinguishing files at all.
So I am considering implementing this feature in the hadoop-hdfs-raid
project (a rough sketch follows the list):
1) The administrator no longer needs to configure raid.xml to decide which
files should be encoded and with which scheme; the block selection process
runs automatically.
2) There is no concept of a file during erasure-code computation. That is,
the unit of encoding is not a file: blocks are grouped strictly by hotness,
and blocks with the same hotness are encoded together.
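
To make point 1) concrete, here is a minimal, hypothetical sketch of
hotness-based block selection. The BlockInfo fields, thresholds, and hotness
levels are illustrative assumptions of mine, not existing hadoop-hdfs-raid
APIs:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical bucketer: classify blocks by last-access age, then hand
 *  each bucket to the matching codec (none, XOR, or Reed-Solomon). */
public class HotnessBucketer {
  enum Hotness { HOT, WARM_XOR, COLD_RS }

  static class BlockInfo {
    final long blockId;
    final long lastAccessMs;
    BlockInfo(long blockId, long lastAccessMs) {
      this.blockId = blockId;
      this.lastAccessMs = lastAccessMs;
    }
  }

  /** Illustrative policy: recently read blocks stay fully replicated. */
  static Hotness classify(BlockInfo b, long nowMs) {
    long ageDays = (nowMs - b.lastAccessMs) / (24L * 3600 * 1000);
    if (ageDays < 7)  return Hotness.HOT;      // leave replicated
    if (ageDays < 90) return Hotness.WARM_XOR; // XOR parity
    return Hotness.COLD_RS;                    // Reed-Solomon parity
  }

  static Map<Hotness, List<BlockInfo>> bucket(List<BlockInfo> blocks,
      long nowMs) {
    Map<Hotness, List<BlockInfo>> buckets =
        new HashMap<Hotness, List<BlockInfo>>();
    for (BlockInfo b : blocks) {
      Hotness h = classify(b, nowMs);
      List<BlockInfo> bucket = buckets.get(h);
      if (bucket == null) {
        bucket = new ArrayList<BlockInfo>();
        buckets.put(h, bucket);
      }
      bucket.add(b);
    }
    return buckets;
  }
}
{code}

Blocks in the same bucket would then be striped together for encoding,
regardless of which files they belong to.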

Please share comments and opinions on this plan. Also, how does one start a
new feature and contribute a patch to the open-source Hadoop project?


Hadoop-Hdfs-trunk - Build # 1100 - Still Failing

2012-07-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1100/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11645 lines...]
[INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
skin.
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs-raid ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/target/findbugsTemp.xml
[INFO] Fork Value is true
 [java] Warnings generated: 31
[INFO] xmlOutput is false
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (site) @ hadoop-hdfs-raid ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/target/docs-src
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [5:22.371s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [33.356s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [14.904s]
[INFO] Apache Hadoop HDFS Raid ... FAILURE [18.111s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 6:29.565s
[INFO] Finished at: Wed Jul 11 11:40:32 UTC 2012
[INFO] Final Memory: 56M/679M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project 
hadoop-hdfs-raid: An Ant BuildException has occured: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/docs
 does not exist. -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-raid
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-3555
Updating MAPREDUCE-3940
Updating HADOOP-8586
Updating HADOOP-8573
Updating HDFS-3568
Updating HADOOP-8584
Updating HADOOP-8423
Updating HADOOP-8587
Updating HDFS-3629
Updating HDFS-3613
Updating MAPREDUCE-4252
Updating HDFS-3611
Updating HADOOP-8525
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1100

2012-07-11 Thread Apache Jenkins Server
See 

Changes:

[eli] HADOOP-8586. Fixup a bunch of SPNEGO misspellings. Contributed by Eli 
Collins

[eli] Revert previous commit, accidentally included HADOOP-8587.

[eli] HADOOP-8586. Fixup a bunch of SPNEGO misspellings. Contributed by Eli 
Collins

[harsh] HDFS-3611. NameNode prints unnecessary WARNs about edit log normally 
skipping a few bytes. Contributed by Colin Patrick McCabe. (harsh)

[harsh] HDFS-3613. GSet prints some INFO level values, which aren't really very 
useful to all. Contributed by Andrew Wang. (harsh)

[vinodkv] MAPREDUCE-3940. ContainerTokens should have an expiry interval. 
Contributed by Siddharth Seth and Vinod Kumar Vavilapalli.

[harsh] HDFS-3629. Fix the typo in the error message about inconsistent storage 
layout version. Contributed by Brandon Li. (harsh)

[eli] Fixup CHANGES.txt

[eli] HADOOP-8584. test-patch.sh should not immediately exit when no tests are 
added or modified. Contributed by Colin Patrick McCabe

[tgraves] HADOOP-8573. Configuration tries to read from an inputstream resource 
multiple times (Robert Evans via tgraves)

[harsh] HADOOP-8423. MapFile.Reader.get() crashes jvm or throws EOFException on 
Snappy or LZO block-compressed data. Contributed by Todd Lipcon. (harsh)

[atm] HDFS-3568. fuse_dfs: add support for security. Contributed by Colin 
McCabe.

[bobby] HADOOP-8525. Provide Improved Traceability for Configuration (bobby)

[bobby] HADOOP-8525. Provide Improved Traceability for Configuration (bobby)

[bobby] MAPREDUCE-4252. MR2 job never completes with 1 pending task (Tom White 
via bobby)

[harsh] HDFS-3555. idle client socket triggers DN ERROR log (should be INFO or 
DEBUG). Contributed by Andy Isaacson. (harsh)

--
[...truncated 11452 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-httpfs ---
[INFO] 
[INFO] There are 1598 checkstyle errors.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-httpfs ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 

[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS BookKeeper Journal 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failed to read artifact descriptor for 
org.eclipse.m2e:lifecycle-mapping:jar:1.0.0
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ 
hadoop-hdfs-bkjournal ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ 
hadoop-hdfs-bkjournal ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-bkjournal ---
[INFO] Wrote classpath file 
'
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-bkjournal ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-bkjournal ---
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-bkjournal ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-bkjournal ---
[INFO] Compiling 8 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12:test (default-test) @ 
hadoop-hdfs-bkjournal ---
[INFO] Tests are skipped.
[INFO] 
[INFO] 

Jenkins build became unstable: Hadoop-Hdfs-0.23-Build #310

2012-07-11 Thread Apache Jenkins Server
See 



Hadoop-Hdfs-0.23-Build - Build # 310 - Unstable

2012-07-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/310/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14399 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.3-SNAPSHOT/hadoop-hdfs-project-0.23.3-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [11:40.218s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [1:30.831s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.129s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 13:12.443s
[INFO] Finished at: Wed Jul 11 11:48:25 UTC 2012
[INFO] Final Memory: 51M/725M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HDFS-3486
Updating MAPREDUCE-4252
Updating MAPREDUCE-3940
Updating HADOOP-8573
Updating MAPREDUCE-3728
Updating MAPREDUCE-3992
Updating HADOOP-8525
Updating HADOOP-8242
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
5 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1

Error Message:
Timed out waiting for corrupt replicas. Waiting for 1, but only found 0

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for corrupt replicas. 
Waiting for 1, but only found 0
at 
org.apache.hadoop.hdfs.DFSTestUtil.waitCorruptReplicas(DFSTestUtil.java:329)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.blockCorruptionRecoveryPolicy(TestDatanodeBlockScanner.java:288)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.__CLR3_0_2wadu2tz90(TestDatanodeBlockScanner.java:236)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1(TestDatanodeBlockScanner.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JU

[jira] [Created] (HDFS-3637) Add support for encrypting the DataTransferProtocol

2012-07-11 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-3637:


 Summary: Add support for encrypting the DataTransferProtocol
 Key: HDFS-3637
 URL: https://issues.apache.org/jira/browse/HDFS-3637
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, hdfs client, security
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


Currently all HDFS RPCs performed by NNs/DNs/clients can be optionally 
encrypted. However, actual data read or written between DNs and clients (or DNs 
to DNs) is sent in the clear. When processing sensitive data on a shared 
cluster, confidentiality of the data read/written from/to HDFS may be desired.
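
For illustration, such a feature might be exposed as a single hdfs-site.xml
toggle; the snippet below is a hypothetical sketch of that shape, not a
shipped configuration key at the time of this issue:

{code}
<!-- Hypothetical hdfs-site.xml snippet: one switch to encrypt block data
     exchanged between clients and DataNodes (and DN to DN). -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
{code}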

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3638) backport HDFS-3568 (add security to fuse_dfs) to branch-1

2012-07-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-3638:
--

 Summary: backport HDFS-3568 (add security to fuse_dfs) to branch-1
 Key: HDFS-3638
 URL: https://issues.apache.org/jira/browse/HDFS-3638
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 1.1.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Backport HDFS-3568 to branch-1.  This will give fuse_dfs support for Kerberos 
authentication, allowing FUSE to be used in a secure cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3639) JspHelper#getUGI should always verify the token if security is enabled

2012-07-11 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3639:
-

 Summary: JspHelper#getUGI should always verify the token if 
security is enabled
 Key: HDFS-3639
 URL: https://issues.apache.org/jira/browse/HDFS-3639
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


JspHelper#getUGI only verifies the given token if the context and nn are set
(added in HDFS-2416). We should call verifyToken unconditionally, i.e. a bug
where "name.node" is not set in the context object should not result in the
token going unverified. In practice this shouldn't be an issue since, per
HDFS-3434, the context and NN should never be null.
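
A minimal sketch of the described fix, with hypothetical helper names
(verifyToken and ugiFromToken here are illustrative stand-ins, not the
actual JspHelper internals):

{code}
// Hypothetical shape of the fix: token verification must not depend on
// optional servlet-context state such as the "name.node" attribute.
static UserGroupInformation getUgi(NameNode nn,
    Token<DelegationTokenIdentifier> token) throws IOException {
  if (UserGroupInformation.isSecurityEnabled()) {
    // Verify unconditionally: a missing context attribute must fail the
    // request rather than silently skip token verification.
    verifyToken(nn, token);
  }
  return ugiFromToken(token);
}
{code}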


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3640) Don't use Util#now or System#currentTimeMillis for calculating intervals

2012-07-11 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3640:
-

 Summary: Don't use Util#now or System#currentTimeMillis for 
calculating intervals
 Key: HDFS-3640
 URL: https://issues.apache.org/jira/browse/HDFS-3640
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins


Per HDFS-3485 we shouldn't use Util#now or System#currentTimeMillis to 
calculate intervals as they can be affected by system clock changes.
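
A minimal sketch of clock-change-safe interval measurement using only
standard Java APIs (this shows the general technique, not the exact patch):
System.nanoTime() is monotonic, while System.currentTimeMillis() can jump
when the wall clock is adjusted by NTP or an administrator.

{code}
public class MonotonicIntervalExample {
  /** Monotonic "now" in milliseconds; only meaningful for intervals. */
  static long monotonicNowMs() {
    return System.nanoTime() / 1000000L;
  }

  public static void main(String[] args) throws InterruptedException {
    long start = monotonicNowMs();
    Thread.sleep(100);                       // stand-in for real work
    long elapsed = monotonicNowMs() - start; // immune to wall-clock changes
    System.out.println("elapsed ms: " + elapsed);
  }
}
{code}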

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3641) Move server Util time methods to common and use now instead of System#currentTimeMillis

2012-07-11 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3641:
-

 Summary: Move server Util time methods to common and use now 
instead of System#currentTimeMillis
 Key: HDFS-3641
 URL: https://issues.apache.org/jira/browse/HDFS-3641
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


To help HDFS-3640, let's move the time methods from the HDFS server Util class 
to common and use now instead of System#currentTimeMillis.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3642) TestReplication#testReplicationSimulatedStorag failed due to unexpected exit

2012-07-11 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3642:
-

 Summary: TestReplication#testReplicationSimulatedStorag failed due 
to unexpected exit
 Key: HDFS-3642
 URL: https://issues.apache.org/jira/browse/HDFS-3642
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins


TestReplication#testReplicationSimulatedStorage failed in a recent Jenkins run.

{noformat}
Failed tests:   
testReplicationSimulatedStorag(org.apache.hadoop.hdfs.TestReplication): Test 
resulted in an unexpected exit
  testReplication(org.apache.hadoop.hdfs.TestReplication): Test resulted in an 
unexpected exit
  testPendingReplicationRetry(org.apache.hadoop.hdfs.TestReplication): Test 
resulted in an unexpected exit
  testReplicateLenMismatchedBlock(org.apache.hadoop.hdfs.TestReplication): Test 
resulted in an unexpected exit
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-3639) JspHelper#getUGI should always verify the token if security is enabled

2012-07-11 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-3639.
---

      Resolution: Fixed
   Fix Version/s: 2.0.1-alpha, 1.2.0
Target Version/s: (was: 1.2.0, 2.0.1-alpha)
    Hadoop Flags: Reviewed

Thanks for the review Daryn. I've committed this and merged to branch-2 and 
branch-1. 

> JspHelper#getUGI should always verify the token if security is enabled
> --
>
> Key: HDFS-3639
> URL: https://issues.apache.org/jira/browse/HDFS-3639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Fix For: 1.2.0, 2.0.1-alpha
>
> Attachments: hdfs-3639-b1.txt, hdfs-3639.txt
>
>
> JspHelper#getUGI only verifies the given token if the context and nn are set
> (added in HDFS-2416). We should call verifyToken unconditionally, i.e. a
> bug where "name.node" is not set in the context object should not result
> in the token going unverified. In practice this shouldn't be an issue
> since, per HDFS-3434, the context and NN should never be null.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3643) hdfsJniHelper.c unchecked string pointers

2012-07-11 Thread Andy Isaacson (JIRA)
Andy Isaacson created HDFS-3643:
---

 Summary: hdfsJniHelper.c unchecked string pointers
 Key: HDFS-3643
 URL: https://issues.apache.org/jira/browse/HDFS-3643
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Andy Isaacson
Assignee: Andy Isaacson


{code}
str = methSignature;
while (*str != ')') str++;
str++;
returnType = *str;
{code}
This loop needs to check for {{'\0'}}. Also the following {{if/else if/else 
if}} cascade doesn't handle unexpected values.
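
A minimal sketch of the bounded scan, continuing the C snippet above
(returning a negative status on failure is an assumption here; the actual
hdfsJniHelper.c error handling may differ):

{code}
const char *str = methSignature;
while (*str != ')' && *str != '\0') {
    str++;  /* stop at the terminator instead of running off the string */
}
if (*str == '\0' || *(++str) == '\0') {
    return -1;  /* malformed signature: no ')' or no return type after it */
}
char returnType = *str;
{code}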

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3644) OEV should recognize and deal with 0.20.20x opcode versions

2012-07-11 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3644:
-

 Summary: OEV should recognize and deal with 0.20.20x opcode 
versions
 Key: HDFS-3644
 URL: https://issues.apache.org/jira/browse/HDFS-3644
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Priority: Minor


We have some opcode conflicts for edit logs between 0.20.20x (LV -19, -31) vs 
newer versions. For edit log loading, we dealt with this by forcing users to 
save namespace on an earlier version before upgrading. But, using a trunk OEV 
on an older version is useful since the OEV has had so many improvements. It 
would be nice to be able to specify a flag to the OEV to be able to run on 
older edit logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira