[jira] [Reopened] (HDFS-4425) NameNode low on available disk space

2013-01-23 Thread project (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

project reopened HDFS-4425:
---


It is not fixed yet, as my root partition still has space in the tens of GBs:

/dev/mapper/vg_operamast1-lv_root
   50G   33G   14G  71% /


> NameNode low on available disk space
> 
>
> Key: HDFS-4425
> URL: https://issues.apache.org/jira/browse/HDFS-4425
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: project
>Priority: Critical
>
> Hi,
> The NameNode switches into safe mode when it has low disk space on the root 
> fs (/), and I have to manually run a command to leave it. Below are the log 
> messages for low space on the root fs. Is there any parameter so that I can 
> reduce the reserved amount?
> 2013-01-21 01:22:52,217 WARN 
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
> available on volume '/dev/mapper/vg_lv_root' is 10653696, which is below the 
> configured reserved amount 104857600
> 2013-01-21 01:22:52,218 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Entering safe mode.
> 2013-01-21 01:22:52,218 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
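The 104857600 bytes in the log is the default of the `dfs.namenode.resource.du.reserved` property (100 MB), which the NameNodeResourceChecker enforces on each volume holding NameNode metadata. A hedged sketch of lowering it in `hdfs-site.xml` (the value below is illustrative, not a recommendation):

```xml
<!-- hdfs-site.xml: reserve 50 MB instead of the 100 MB default.
     Lowering this only hides the symptom; the NameNode still needs real
     headroom on every volume holding its image/edits directories. -->
<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <value>52428800</value>
</property>
```

Leaving safe mode manually, as described above, is done with `hdfs dfsadmin -safemode leave`.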

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-0.23-Build - Build # 503 - Still Failing

2013-01-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/503/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9203 lines...]
 [exec] There appears to be a problem with your site build.
 [exec] 
 [exec] Read the output above:
 [exec] * Cocoon will report the status of each document:
 [exec] - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
 [exec] * Even if only one link is broken, you will still get "failed".
 [exec] * Your site would still be generated, but some pages would be 
broken.
 [exec]   - See 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
 [exec] 
 [exec] Total time: 18 seconds
 [exec] 
 [exec]   Copying broken links file to site root.
 [exec]   
 [exec] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [1:58.238s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:58.848s
[INFO] Finished at: Wed Jan 23 11:35:17 UTC 2013
[INFO] Final Memory: 35M/357M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project 
hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Publishing Javadoc
ERROR: Publisher hudson.tasks.JavadocArchiver aborted due to exception
java.lang.IllegalStateException: basedir 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/site/api
 does not exist.
at org.apache.tools.ant.DirectoryScanner.scan(DirectoryScanner.java:879)
at hudson.FilePath$37.hasMatch(FilePath.java:2109)
at hudson.FilePath$37.invoke(FilePath.java:2006)
at hudson.FilePath$37.invoke(FilePath.java:1996)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2309)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Recording fingerprints
Updating MAPREDUCE-4946
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #503

2013-01-23 Thread Apache Jenkins Server
See 

Changes:

[sseth] MAPREDUCE-4946. Fix a performance problem for large jobs by reducing 
the number of map completion event type conversions. Contributed by Jason Lowe.

--
[...truncated 9010 lines...]
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 

 [exec] Copying main skin images to site ...
 [exec] Created dir: 

 [exec] Copying 20 files to 

 [exec] Copying 14 files to 

 [exec] Copying project skin images to site ...
 [exec] Copying main skin css and js files to site ...
 [exec] Copying 11 files to 

 [exec] Copied 4 empty directories to 3 empty directories under 

 [exec] Copying 4 files to 

 [exec] Copying project skin css and js files to site ...
 [exec] 
 [exec] Finished copying the non-generated resources.
 [exec] Now Cocoon will generate the rest.
 [exec]   
 [exec] 
 [exec] Static site will be generated at:
 [exec] 

 [exec] 
 [exec] Cocoon will report the status of each document:
 [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
 [exec]   
 [exec] 
 
 [exec] cocoon 2.1.12-dev
 [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights 
reserved.
 [exec] 
 
 [exec] 
 [exec] 
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] * [1/26][26/30]   2.426s 8.6Kb   linkmap.html
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] * [2/26][1/29]0.766s 19.4Kb  hdfs_permissions_guide.html
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] ^
api/

Hadoop-Hdfs-trunk - Build # 1294 - Still Failing

2013-01-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1294/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11682 lines...]
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] * [51/3][1/29]0.293s 27.3Kb  hdfs_imageviewer.html
 [exec] * [52/2][0/0] 0.158s 31.0Kb  hdfs_imageviewer.pdf
 [exec] * [54/0][0/0] 0.023s 766bimages/favicon.ico
 [exec] Java Result: 1
 [exec] 
 [exec] BUILD FAILED
 [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error 
building site.
 [exec] 
 [exec] There appears to be a problem with your site build.
 [exec] 
 [exec] Read the output above:
 [exec] * Cocoon will report the status of each document:
 [exec] - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
 [exec] * Even if only one link is broken, you will still get "failed".
 [exec] * Your site would still be generated, but some pages would be 
broken.
 [exec]   - See 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
 [exec] 
 [exec] Total time: 15 seconds
 [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
 [exec] 
 [exec]   Copying broken links file to site root.
 [exec]   
 [exec] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:22:56.567s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:22:57.332s
[INFO] Finished at: Wed Jan 23 12:56:34 UTC 2013
[INFO] Final Memory: 37M/407M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project 
hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4946
Updating MAPREDUCE-4808
Updating YARN-277
Updating HADOOP-9231
Updating YARN-231
Updating MAPREDUCE-4949
Updating YARN-319
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1294

2013-01-23 Thread Apache Jenkins Server
See 

Changes:

[tomwhite] YARN-319. Submitting a job to a fair scheduler queue for which the 
user does not have permission causes the client to wait forever. Contributed by 
shenhong.

[hitesh] YARN-231. RM Restart - Add FS-based persistent store implementation 
for RMStateStore. Contributed by Bikas Saha

[hitesh] YARN-277. Use AMRMClient in DistributedShell to exemplify the 
approach. Contributed by Bikas Saha

[suresh] HADOOP-9231. Add missing CHANGES.txt

[suresh] HADOOP-9231. Parametrize staging URL for the uniformity of 
distributionManagement. Contributed by Konstantin Boudnik.

[sseth] MAPREDUCE-4946. Fix a performance problem for large jobs by reducing 
the number of map completion event type conversions. Contributed by Jason Lowe.

[tucu] MAPREDUCE-4949. Enable multiple pi jobs to run in parallel. (sandyr via 
tucu)

[tucu] MAPREDUCE-4808. Refactor MapOutput and MergeManager to facilitate reuse 
by Shuffle implementations. (masokan via tucu)

--
[...truncated 11489 lines...]
 [exec] examine-proj:
 [exec] 
 [exec] validation-props:
 [exec] Using these catalog descriptors: 
/home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:
 [exec] 
 [exec] validate-xdocs:
 [exec] 12 file(s) have been successfully validated.
 [exec] ...validated xdocs
 [exec] 
 [exec] validate-skinconf:
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 

 [exec] Copying main skin images to site ...
 [exec] Created dir: 

 [exec] Copying 20 files to 

 [exec] Copying 14 files to 

 [exec] Copying project skin images to site ...
 [exec] Copying main skin css and js files to site ...
 [exec] Copying 11 files to 

 [exec] Copied 4 empty directories to 3 empty directories under 

 [exec] Copying 4 files to 

 [exec] Copying project skin css and js files to site ...
 [exec] 
 [exec] Finished copying the non-generated resources.
 [exec] Now Cocoon will generate the rest.
 [exec]   
 [exec] 
 [exec] Static site will be generated at:
 [exec] 


[jira] [Created] (HDFS-4431) Support snapshot in OfflineImageViewer

2013-01-23 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4431:
---

 Summary: Support snapshot in OfflineImageViewer
 Key: HDFS-4431
 URL: https://issues.apache.org/jira/browse/HDFS-4431
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


Add support in OfflineImageViewer to read fsimage with snapshot-related 
information (version -41).



[jira] [Created] (HDFS-4432) Support INodeFileUnderConstructionWithSnapshot in FSImage saving/loading

2013-01-23 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4432:
---

 Summary: Support INodeFileUnderConstructionWithSnapshot in FSImage 
saving/loading
 Key: HDFS-4432
 URL: https://issues.apache.org/jira/browse/HDFS-4432
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


1. The FSImage saver/loader needs to be able to recognize 
INodeFileUnderConstruction, INodeFileUnderConstructionWithSnapshot, and 
INodeFileUnderConstructionSnapshot.

2. The FSImage saver/loader should be able to form the correct circular linked 
list for INodeFileUnderConstruction(With)Snapshot.

3. Add new corresponding unit tests and support file appending in 
TestSnapshot#testSnapshot.



[jira] [Created] (HDFS-4433) make TestPeerCache not flaky

2013-01-23 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4433:
--

 Summary: make TestPeerCache not flaky
 Key: HDFS-4433
 URL: https://issues.apache.org/jira/browse/HDFS-4433
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4433.001.patch

TestPeerCache is flaky now because it relies on using the same global cache for 
every test function.  So the cache timeout can't be set to something different 
for each test.

Also, we should implement equals and hashCode for {{FakePeer}}, since otherwise 
{{testMultiplePeersWithSameDnId}} is not really testing what happens when 
multiple equal peers are inserted into the cache.  (The default equals is 
object equality).
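The last point can be illustrated with a minimal sketch. This is not the actual patch: `FakePeer` here is a hypothetical stand-in identified only by a datanode id string, but it shows why the override matters for a cache keyed on peers.

```java
import java.util.Objects;

// Hypothetical stand-in for the test helper in HDFS-4433: a peer
// identified solely by a datanode id.
class FakePeer {
    private final String dnId;

    FakePeer(String dnId) { this.dnId = dnId; }

    // Without these overrides, java.lang.Object supplies identity
    // equality, so two FakePeers carrying the same dnId would hash to
    // different buckets and never compare equal in a cache.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof FakePeer)) return false;
        return dnId.equals(((FakePeer) o).dnId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(dnId);
    }
}
```

With value-based equality in place, inserting two equal peers into the cache actually exercises the multiple-equal-keys path the test is meant to cover.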



[jira] [Resolved] (HDFS-4433) make TestPeerCache not flaky

2013-01-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-4433.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

> make TestPeerCache not flaky
> 
>
> Key: HDFS-4433
> URL: https://issues.apache.org/jira/browse/HDFS-4433
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4433.001.patch
>
>
> TestPeerCache is flaky now because it relies on using the same global cache 
> for every test function.  So the cache timeout can't be set to something 
> different for each test.
> Also, we should implement equals and hashCode for {{FakePeer}}, since 
> otherwise {{testMultiplePeersWithSameDnId}} is not really testing what 
> happens when multiple equal peers are inserted into the cache.  (The default 
> equals is object equality).



[jira] [Created] (HDFS-4434) Provide a mapping from INodeId to INode

2013-01-23 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4434:


 Summary: Provide a mapping from INodeId to INode
 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li


This JIRA is to provide a way to access the INode via its id. The proposed 
solution is to have an in-memory mapping from INodeId to INode. 
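A heavily simplified sketch of the proposed direction. The real implementation would live in the namenode's internal structures; the `INode` type and plain `HashMap` below are hypothetical stand-ins used only to show the id-to-inode lookup.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the HDFS INode class, keyed by a numeric id.
class INode {
    final long id;
    final String name;
    INode(long id, String name) { this.id = id; this.name = name; }
}

// Sketch of an in-memory INodeId -> INode map; registration happens
// when an inode is created, lookup resolves an id back to the inode.
class INodeMap {
    private final Map<Long, INode> map = new HashMap<>();

    void put(INode inode) { map.put(inode.id, inode); }

    INode get(long id) { return map.get(id); }  // null for an unknown id
}
```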



[jira] [Created] (HDFS-4436) The snapshot copy INode.recordModification(..) returned is never used

2013-01-23 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4436:


 Summary: The snapshot copy INode.recordModification(..) returned 
is never used
 Key: HDFS-4436
 URL: https://issues.apache.org/jira/browse/HDFS-4436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


INode.recordModification(..) and some other related methods currently return a 
pair of the current object and the snapshot copy. However, the snapshot copy is 
never used.



[jira] [Resolved] (HDFS-4425) NameNode low on available disk space

2013-01-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-4425.
--

Resolution: Cannot Reproduce

Again, I'm closing this issue:
# JIRA is not the place to deal with what is currently believed to be a config 
problem affecting a single user.
# Hadoop 2.0.0-cdh4.1.2 is not an ASF release: you need to take it up with 
Cloudera via their support channels, both paid and mailing list.
# If it is found to be something in the Apache codebase, then they can file a 
reproducible bug here which we can all fix together.
# 'Critical' is reserved for issues that are considered risks to data across 
multiple clusters. This may be critical for you, but given that it works for 
everyone else, it is minor.

Please, please, please don't use JIRA for this kind of problem; it's like 
asking Linus for help getting networking right on your Ubuntu laptop.

> NameNode low on available disk space
> 
>
> Key: HDFS-4425
> URL: https://issues.apache.org/jira/browse/HDFS-4425
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: project
>Priority: Critical
>
> Hi,
> The NameNode switches into safe mode when it has low disk space on the root 
> fs (/), and I have to manually run a command to leave it. Below are the log 
> messages for low space on the root fs. Is there any parameter so that I can 
> reduce the reserved amount?
> 2013-01-21 01:22:52,217 WARN 
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
> available on volume '/dev/mapper/vg_lv_root' is 10653696, which is below the 
> configured reserved amount 104857600
> 2013-01-21 01:22:52,218 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Entering safe mode.
> 2013-01-21 01:22:52,218 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
