Re: Regarding hsync

2013-07-11 Thread Hemant Bhanawat
Hi, 

Any help? 

Thanks in advance, 
Hemant 

- Original Message -

From: "Hemant Bhanawat"  
To: hdfs-dev@hadoop.apache.org 
Sent: Tuesday, July 9, 2013 12:55:23 PM 
Subject: Regarding hsync 

Hi, 

I am currently working on hadoop version 2.0.*. 

Currently, hsync does not update the file size on namenode. So, if my process 
dies after calling hsync but before calling file close, the file is left with 
an inconsistent file size. I would like to fix this file size. Is there a way 
to do that? A workaround that I have come across is to open the file stream in 
append mode and close it. This fixes the file size on the namenode. Is it a 
reliable solution? 
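For concreteness, a minimal sketch of that append-and-close workaround (the
path and class name are illustrative, and the recovery behavior is as
described above rather than verified here):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FixLengthByAppend {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Re-opening the file for append and closing it immediately makes the
    // NameNode recover the last block's length and finalize the file.
    fs.append(new Path("/data/partially-synced-file")).close();
  }
}
{code}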

Thanks, 
Hemant 



Hadoop-Hdfs-trunk - Build # 1457 - Still Failing

2013-07-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15180 lines...]
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ............................ SUCCESS [1:37:59.489s]
[INFO] Apache Hadoop HttpFS .......................... SUCCESS [2:23.171s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ......... FAILURE [56.414s]
[INFO] Apache Hadoop HDFS-NFS ........................ FAILURE [25.715s]
[INFO] Apache Hadoop HDFS Project .................... SUCCESS [0.032s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:41:45.674s
[INFO] Finished at: Thu Jul 11 13:15:26 UTC 2013
[INFO] Final Memory: 54M/898M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'.
 -> [Help 2]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-9661
Updating HADOOP-9355
Updating HDFS-4962
Updating HDFS-4645
Updating HADOOP-9673
Updating YARN-883
Updating HDFS-4372
Updating HDFS-4969
Updating YARN-736
Updating HDFS-4908
Updating MAPREDUCE-5333
Updating MAPREDUCE-4374
Updating HDFS-4797
Updating YARN-569
Updating HADOOP-9414
Updating HDFS-4887
Updating YARN-368
Updating YARN-295
Updating HADOOP-9416
Updating YARN-866
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1457

2013-07-11 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HDFS-4372. Track NameNode startup progress. Contributed by Chris 
Nauroth.

[cdouglas] YARN-569. Add support for requesting and enforcing preemption 
requests via
a capacity monitor. Contributed by Carlo Curino, Chris Douglas

[tucu] updating CHANGES.txt after committing 
MAPREDUCE-5333,HADOOP-9661,HADOOP-9355,HADOOP-9673,HADOOP-9414,HADOOP-9416,HDFS-4797,YARN-866,YARN-736,YARN-883
 to 2.1-beta branch

[cnauroth] MAPREDUCE-4374. Fix child task environment variable config and add 
support for Windows. Contributed by Chuan Liu.

[szetszwo] In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta.

[tucu] HDFS-4969. WebhdfsFileSystem expects non-standard WEBHDFS Json element. 
(rkanter via tucu)

[vinodkv] YARN-295. Fixed a race condition in ResourceManager RMAppAttempt 
state machine. Contributed by Mayank Bansal.

[vinodkv] YARN-368. Fixed a typo in error message in Auxiliary services. 
Contributed by Albert Chu.

[jing9] HDFS-4962. Use enum for nfs constants. Contributed by Tsz Wo (Nicholas) 
SZE.

[kihwal] HDFS-4887. TestNNThroughputBenchmark exits abruptly. Contributed by 
Kihwal Lee.

--
[...truncated 14987 lines...]
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.516 sec

Results :

Failed tests:   
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints):
 SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs 
---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestOffsetRange
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.323 sec
Running org.apache.hadoop.hdfs.nfs.TestMountd
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.436 sec

Results :

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.3.1:jar (prepare-jar) @ hadoop-hdfs-nfs ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-jar-plugin:2.3.1:test-jar (prepare-test-jar) @ hadoop-hdfs-nfs 
---
[INFO] Building jar: 

[INFO] 
[INFO] >>> maven-source-plugin:2.1.2

False Positives in Release audit problems report? - confirmation required

2013-07-11 Thread Vivek Ganesan

Hi,

I am on my way to contribute my first patch to hdfs.

My patch has the following files.

viv@viv-Ideapad-Z570:~/Work/hadoop-trunk$ svn stat
M 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java
M 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java



But, I get releaseAudit violations as follows in 
/tmp/patchReleaseAuditProblems.txt


 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/VERSION
 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/fsimage_000.md5
 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/seen_txid
 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/VERSION
 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/fsimage_000.md5
 !? 
/home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/seen_txid
Lines that start with ? in the release audit report indicate files 
that do not have an Apache license header.


Actually, the build directory is not source controlled and not a part of 
the patch.


Is this a false positive?

Thanks in advance.

Regards,
Vivek Ganesan


Re: Regarding hsync

2013-07-11 Thread Jing Zhao
Hi Hemant,

HDFS-4213 (https://issues.apache.org/jira/browse/HDFS-4213) may be
the one you're looking for. It added an hsync variant that can also
update the file size on the NameNode: you can call
"hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH))" to do that.

Thanks,
-Jing

On Thu, Jul 11, 2013 at 1:38 AM, Hemant Bhanawat  wrote:
> Hi,
>
> Any help?
>
> Thanks in advance,
> Hemant
>
> - Original Message -
>
> From: "Hemant Bhanawat" 
> To: hdfs-dev@hadoop.apache.org
> Sent: Tuesday, July 9, 2013 12:55:23 PM
> Subject: Regarding hsync
>
> Hi,
>
> I am currently working on hadoop version 2.0.*.
>
> Currently, hsync does not update the file size on namenode. So, if my process 
> dies after calling hsync but before calling file close, the file is left with 
> an inconsistent file size. I would like to fix this file size. Is there a way 
> to do that? A workaround that I have come across is to open the file stream 
> in append mode and close it. This fixes the file size on the namenode. Is it 
> a reliable solution?
>
> Thanks,
> Hemant
>


[jira] [Resolved] (HDFS-4975) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-4975.
-

  Resolution: Fixed
Target Version/s: 1-win
Hadoop Flags: Reviewed

+1 for the patch.  I tested successfully on Mac and Windows.  I committed this 
to branch-1-win.  Thank you for the patch, Xi.

> Branch-1-win TestReplicationPolicy failed caused by stale data node handling
> 
>
> Key: HDFS-4975
> URL: https://issues.apache.org/jira/browse/HDFS-4975
> Project: Hadoop HDFS
>  Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Xi Fang
> Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9714.1.patch, HDFS-4975.2.patch
>
>
> TestReplicationPolicy failed on 
> * testChooseTargetWithMoreThanAvailableNodes()
> * testChooseTargetWithStaleNodes()
> * testChooseTargetWithHalfStaleNodes()
> The root cause of the testChooseTargetWithMoreThanAvailableNodes failure is 
> the following:
> In BlockPlacementPolicyDefault#chooseTarget()
> {code}
>   chooseRandom(numOfReplicas, NodeBase.ROOT, excludedNodes,
>       blocksize, maxNodesPerRack, results);
> } catch (NotEnoughReplicasException e) {
>   FSNamesystem.LOG.warn("Not able to place enough replicas, still in need of "
>       + numOfReplicas);
> {code}
> However, numOfReplicas is passed into chooseRandom() as an int (a primitive 
> type in Java), i.e. by value, so the update to numOfReplicas inside 
> chooseRandom() does not change the value seen in chooseTarget(). 
> The root cause for testChooseTargetWithStaleNodes() and 
> testChooseTargetWithHalfStaleNodes() is that the current 
> BlockPlacementPolicyDefault#chooseTarget() doesn't check whether a node is stale.  
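A standalone illustration of the pass-by-value point above (the method name
mirrors the real code, but the class is a toy):

{code}
public class PassByValueDemo {
  // Reassigning a primitive int parameter is visible only inside the
  // callee; Java passes primitives by value.
  static void chooseRandom(int numOfReplicas) {
    numOfReplicas -= 1;
  }

  public static void main(String[] args) {
    int numOfReplicas = 3;
    chooseRandom(numOfReplicas);
    System.out.println(numOfReplicas); // prints 3, not 2
  }
}
{code}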

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: False Positives in Release audit problems report? - confirmation required

2013-07-11 Thread Chris Nauroth
Yes, I suspect this is a false positive due to some intermediate files left
behind by a test run.  Please feel free to post your patch.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Thu, Jul 11, 2013 at 6:49 AM, Vivek Ganesan wrote:

> Hi,
>
> I am on my way to contribute my first patch to hdfs.
>
> My patch has the following files.
>
> viv@viv-Ideapad-Z570:~/Work/hadoop-trunk$ svn stat
> M hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java
> M hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
>
>
> But, I get releaseAudit violations as follows in 
> /tmp/patchReleaseAuditProblems.txt
>
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/VERSION
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/fsimage_000.md5
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name1/current/seen_txid
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/VERSION
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/fsimage_000.md5
>  !? /home/viv/Work/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/build/test/data/dfs/name2/current/seen_txid
> Lines that start with ? in the release audit report indicate files
> that do not have an Apache license header.
>
> Actually, the build directory is not source controlled and not a part of
> the patch.
>
> Is this a false positive?
>
> Thanks in advance.
>
> Regards,
> Vivek Ganesan
>


[jira] [Created] (HDFS-4981) chmod 777 the .snapshot directory does not error that modification on RO snapshot is disallowed

2013-07-11 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-4981:
-

 Summary: chmod 777 the .snapshot directory does not error that 
modification on RO snapshot is disallowed
 Key: HDFS-4981
 URL: https://issues.apache.org/jira/browse/HDFS-4981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Trivial


Snapshots are currently read-only, so it's expected that anyone who tries to 
modify the .snapshot directory is denied.

However, if the user runs chmod 777 on the .snapshot directory, the operation 
does not report an error. The user should be alerted that modifications are not 
allowed, even though this particular invocation didn't actually change anything.

Using any other mode does trigger the error, though.

{code}
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 777 /user/schu/test_dir_1/.snapshot/
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 755 /user/schu/test_dir_1/.snapshot/
chmod: changing permissions of '/user/schu/test_dir_1/.snapshot': Modification on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 435 /user/schu/test_dir_1/.snapshot/
chmod: changing permissions of '/user/schu/test_dir_1/.snapshot': Modification on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chown hdfs /user/schu/test_dir_1/.snapshot/
chown: changing ownership of '/user/schu/test_dir_1/.snapshot': Modification on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chown schu /user/schu/test_dir_1/.snapshot/
chown: changing ownership of '/user/schu/test_dir_1/.snapshot': Modification on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ 
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4982) JournalNode should relogin from keytab before fetching logs from other JNs

2013-07-11 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-4982:
-

 Summary: JournalNode should relogin from keytab before fetching 
logs from other JNs
 Key: HDFS-4982
 URL: https://issues.apache.org/jira/browse/HDFS-4982
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon


We've seen an issue in a secure cluster where, after a failover, the new NN 
isn't able to properly coordinate QJM recovery. The JNs fail to fetch logs from 
each other, apparently because they don't hold a valid Kerberos TGT. It seems 
that we need to add the {{checkTGTAndReloginFromKeytab}} call prior to making 
the HTTP connection, since the Java HTTP machinery doesn't do an automatic 
relogin.
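A minimal sketch of the proposed fix (class and helper names are hypothetical;
only the UserGroupInformation call is the actual proposal):

{code}
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.security.UserGroupInformation;

public class JournalFetchSketch {
  // Hypothetical helper standing in for the JN-to-JN edit log fetch.
  static InputStream openLogStream(URL remoteJournalUrl) throws Exception {
    // Re-login from the keytab if the Kerberos TGT is missing or expired;
    // the java.net HTTP machinery will not refresh the TGT on its own.
    UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
    return remoteJournalUrl.openStream();
  }
}
{code}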

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-07-11 Thread Harsh J (JIRA)
Harsh J created HDFS-4983:
-

 Summary: Numeric usernames do not work with WebHDFS FS
 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J


Per the file 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
 the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.

Given this, using a username such as "123" seems to fail (tried on an insecure 
setup):

{code}
[123@host-1 ~]$ whoami
123
[123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
-ls: Invalid value: "123" does not belong to the domain ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
{code}
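A standalone check of the quoted pattern reproduces the rejection; the first
character class admits only a letter or underscore, so a leading digit can
never match:

{code}
import java.util.regex.Pattern;

public class UserParamPatternCheck {
  public static void main(String[] args) {
    Pattern domain = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
    System.out.println(domain.matcher("hdfs").matches()); // true
    System.out.println(domain.matcher("123").matches());  // false
  }
}
{code}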

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4249) Add status NameNode startup to webUI

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-4249.
-

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   3.0.0
 Hadoop Flags: Reviewed

All related patches have been committed to trunk, branch-2, branch-2.1-beta, 
and branch-2.1.0-beta.

> Add status NameNode startup to webUI 
> -
>
> Key: HDFS-4249
> URL: https://issues.apache.org/jira/browse/HDFS-4249
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
> Affects Versions: 3.0.0, 2.1.0-beta
> Reporter: Suresh Srinivas
> Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4249.1.pdf, HDFS-4249-1.png, HDFS-4249-2.png, 
> HDFS-4249-3.png, HDFS-4249-4.png, HDFS-4249-5.png
>
>
> Currently the NameNode WebUI server starts only after the fsimage is loaded, 
> edits are applied, and the checkpoint is complete. Any status related to the 
> namenode starting up is available only in the logs. I propose starting the 
> webserver before loading the namespace and providing namenode startup 
> information.
> More details in the next comment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: please, subscribe me!

2013-07-11 Thread Zhijie Shen
yarn-dev-subscr...@hadoop.apache.org is the correct email address for dev
mailing list subscription; the same pattern applies to the other projects' dev
mailing lists. Please check http://hadoop.apache.org/mailing_lists.html for
details.


On Thu, Jul 11, 2013 at 11:26 PM, Man-Young Goo  wrote:

> please, subscribe me!
>



-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/