Re: Review Request: HDFS-2334: Add Closeable to JournalManager

2011-10-24 Thread Ivan Kelly

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2247/
---

(Updated 2011-10-24 09:18:46.680787)


Review request for hadoop-hdfs.


Changes
---

Added check for null on JournalAndStream#close


Summary
---

A JournalManager may hold resources for the duration of its lifetime. 
This isn't the case at the moment for FileJournalManager, but 
BookKeeperJournalManager will, and it's conceivable that FileJournalManager 
could take a lock on a directory, etc.

This JIRA is to add Closeable to JournalManager so that these resources can be 
cleaned up when FSEditLog is closed.
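The shape of the change might look like the following sketch (the class names come from the diff below, but the bodies and the isOpen helper are illustrative, not the actual patch):

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch: JournalManager extends Closeable so FSEditLog can
// release journal resources (a BookKeeper connection, a directory lock, ...)
// when the edit log is shut down.
interface JournalManager extends Closeable {
}

class FileJournalManager implements JournalManager {
  private boolean open = true;

  @Override
  public void close() {   // narrows Closeable's "throws IOException"
    open = false;         // release any held resources; safe to call twice
  }

  boolean isOpen() { return open; }
}

class FSEditLogSketch {
  // Mirrors the note in "Changes": guard against a null journal before closing.
  static void closeJournal(JournalManager jm) {
    if (jm == null) {
      return;
    }
    try {
      jm.close();
    } catch (IOException ioe) {
      // FSEditLog would log or rethrow; swallowed in this sketch.
    }
  }
}
```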


This addresses bug HDFS-2334.
http://issues.apache.org/jira/browse/HDFS-2334


Diffs (updated)
-

  
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
 aac2a35 
  
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
 8cfc975 
  
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalManager.java
 0bb7b0f 
  
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java
 6976620 
  
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
 0d6bc74 

Diff: https://reviews.apache.org/r/2247/diff


Testing
---


Thanks,

Ivan



[jira] [Created] (HDFS-2495) Increase granularity of write operations in ReplicationMonitor thus reducing contention for write lock

2011-10-24 Thread Tomasz Nykiel (Created) (JIRA)
Increase granularity of write operations in ReplicationMonitor thus reducing 
contention for write lock
--

 Key: HDFS-2495
 URL: https://issues.apache.org/jira/browse/HDFS-2495
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tomasz Nykiel
Assignee: Tomasz Nykiel


For processing blocks in ReplicationMonitor 
(BlockManager.computeReplicationWork), we first obtain a list of blocks to be 
replicated by calling chooseUnderReplicatedBlocks, and then for each block 
found we call computeReplicationWorkForBlock. The latter processes a 
block in three stages, acquiring the write lock twice per call:

1. obtaining block related info (livenodes, srcnode, etc.) under lock
2. choosing target for replication
3. scheduling replication (under lock)

We would like to change this behaviour and decrease contention for the write 
lock by batching blocks: executing stages 1, 2 and 3 for a set of blocks, 
rather than for each block separately. This would decrease the number of write 
lock acquisitions per iteration from 2 * numberOfBlocks to 2.

Also, the info-level logging can be pushed outside the write lock.
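The proposed batching can be sketched as follows (a minimal illustration of the locking pattern only; the class and stage bodies are hypothetical, not the BlockManager code):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the proposal: take the write lock once per batch
// for stage 1 and once for stage 3, instead of twice per block.
class ReplicationBatcher {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  int lockAcquisitions = 0;   // exposed only to make the pattern observable

  void processBatch(List<String> blocks) {
    lock.writeLock().lock();          // stage 1: gather info for ALL blocks
    lockAcquisitions++;
    try {
      for (String b : blocks) { /* collect live nodes, src node, ... */ }
    } finally {
      lock.writeLock().unlock();
    }

    for (String b : blocks) { /* stage 2: choose targets, no lock held */ }

    lock.writeLock().lock();          // stage 3: schedule ALL replications
    lockAcquisitions++;
    try {
      for (String b : blocks) { /* schedule replication work */ }
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

For a batch of N blocks this takes the write lock twice, versus 2 * N times when each block is processed independently.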

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hadoop-Hdfs-0.23-Build - Build # 49 - Still Unstable

2011-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/49/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9793 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (tar) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs 
---
[INFO] 
[INFO] There are 9019 checkstyle errors.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.23.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [3:22.838s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.056s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 3:23.340s
[INFO] Finished at: Mon Oct 24 11:36:42 UTC 2011
[INFO] Final Memory: 58M/754M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HDFS-2452
Updating HDFS-2485
Updating MAPREDUCE-2708
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDfsOverAvroRpc.testWorkingDirectory

Error Message:
Two methods with same name: delete

Stack Trace:
org.apache.avro.AvroTypeException: Two methods with same name: delete
	at org.apache.avro.reflect.ReflectData.getProtocol(ReflectData.java:394)
	at org.apache.avro.ipc.reflect.ReflectResponder.&lt;init&gt;(ReflectResponder.java:36)
	at org.apache.hadoop.ipc.AvroRpcEngine.createResponder(AvroRpcEngine.java:189)
	at org.apache.hadoop.ipc.AvroRpcEngine$TunnelResponder.&lt;init&gt;(AvroRpcEngine.java:196)
	at org.apache.hadoop.ipc.AvroRpcEngine.getServer(AvroRpcEngine.java:232)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.&lt;init&gt;(NameNodeRpcServer.java:144)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:350)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:328)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:452)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:444)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:742)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:641)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:545)
	at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:261)
	at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:85)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:247)
	at org.apache.hadoop.hdfs.TestLocalDFS.__CLR3_0_2hl5jzp16g3(TestLocalDFS.java:64)
	at org.apache.hadoop.hdfs.TestLocalDFS.testWorkingDirectory(TestLocalDFS.java:62)
	at org.apache.hadoop.hdfs.TestDfsO
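The root cause here is that Avro's reflect-based protocol generation cannot represent two methods with the same name on one interface. A minimal stdlib-only illustration of that failure mode (ClientProtocol below is a stand-in with hypothetical overloads, not the real HDFS interface, and the check is a simulation of what Avro's ReflectData rejects, not Avro itself):

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Stand-in interface with an overloaded method, the shape Avro reflect
// cannot map into a protocol (one message per method name).
interface ClientProtocol {
  boolean delete(String src);                     // hypothetical legacy overload
  boolean delete(String src, boolean recursive);  // hypothetical newer overload
}

class OverloadCheck {
  // Returns the first method name declared more than once, else null.
  static String findDuplicate(Class<?> iface) {
    Map<String, Integer> counts = new HashMap<>();
    for (Method m : iface.getMethods()) {
      counts.merge(m.getName(), 1, Integer::sum);
    }
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      if (e.getValue() > 1) {
        return e.getKey();  // Avro would raise AvroTypeException here
      }
    }
    return null;
  }
}
```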



Hadoop-Hdfs-trunk - Build # 841 - Still Failing

2011-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/841/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10138 lines...]
[ERROR] [3] [INFO] Invalid artifact specification: 
'hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml'. Must contain 
at least three fields, separated by ':'.
[ERROR] 
[ERROR] [4] [INFO] Failed to resolve classpath resource: 
/assemblies/hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml from 
classloader: 
ClassRealm[plugin>org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-3, 
parent: sun.misc.Launcher$AppClassLoader@126b249]
[ERROR] 
[ERROR] [5] [INFO] Failed to resolve classpath resource: 
hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml from 
classloader: 
ClassRealm[plugin>org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-3, 
parent: sun.misc.Launcher$AppClassLoader@126b249]
[ERROR] 
[ERROR] [6] [INFO] File: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
 does not exist.
[ERROR] 
[ERROR] [7] [INFO] Building URL from location: 
hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
[ERROR] Error:
[ERROR] java.net.MalformedURLException: no protocol: 
hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
[ERROR] at java.net.URL.(URL.java:567)
[ERROR] at java.net.URL.(URL.java:464)
[ERROR] at java.net.URL.(URL.java:413)
[ERROR] at 
org.apache.maven.shared.io.location.URLLocatorStrategy.resolve(URLLocatorStrategy.java:54)
[ERROR] at org.apache.maven.shared.io.location.Locator.resolve(Locator.java:81)
[ERROR] at 
org.apache.maven.plugin.assembly.io.DefaultAssemblyReader.addAssemblyFromDescriptor(DefaultAssemblyReader.java:309)
[ERROR] at 
org.apache.maven.plugin.assembly.io.DefaultAssemblyReader.readAssemblies(DefaultAssemblyReader.java:140)
[ERROR] at 
org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:328)
[ERROR] at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
[ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
[ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[ERROR] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:597)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
[ERROR] at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2452
Updating HDFS-2485
Updating MAPREDUCE-2708
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
###

Build failed in Jenkins: Hadoop-Hdfs-trunk #841

2011-10-24 Thread Apache Jenkins Server
See 

Changes:

[vinodkv] MAPREDUCE-2708. Designed and implemented MR Application Master 
recovery to make MR AMs resume their progress after restart. Contributed by 
Sharad Agarwal.

[shv] HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. 
Contributed by Uma Maheswara Rao.

[stevel] HDFS-2485

[stevel] HDFS-2485

--
[...truncated 9945 lines...]
  [javadoc] [loading 
[...truncated: repeated "[javadoc] [loading" lines; the class names are missing from the archive...]

[jira] [Created] (HDFS-2496) Separate datatypes for DatanodeProtocol

2011-10-24 Thread Suresh Srinivas (Created) (JIRA)
Separate datatypes for DatanodeProtocol
---

 Key: HDFS-2496
 URL: https://issues.apache.org/jira/browse/HDFS-2496
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


This jira separates the wire types for DatanodeProtocol from the types used by 
the client and server, similar to HDFS-2181.





[jira] [Created] (HDFS-2497) Fix TestBackupNode failure

2011-10-24 Thread Suresh Srinivas (Created) (JIRA)
Fix TestBackupNode failure
--

 Key: HDFS-2497
 URL: https://issues.apache.org/jira/browse/HDFS-2497
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.24.0
 Attachments: HDFS-2497.txt

TestBackupNode fails due to a NullPointerException.





Hadoop-Hdfs-trunk-Commit - Build # 1221 - Failure

2011-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1221/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7034 lines...]
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:[873,39]
 cannot find symbol
symbol  : variable RegisterCommand
location: class org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:[1422,34]
 cannot find symbol
symbol  : class FinalizeCommand
location: class org.apache.hadoop.hdfs.server.datanode.DataNode.BPOfferService
[INFO] 9 errors 
[INFO] -
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [7.943s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.161s
[INFO] Finished at: Mon Oct 24 18:42:17 UTC 2011
[INFO] Final Memory: 19M/249M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) 
on project hadoop-hdfs: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java:[72,45]
 cannot find symbol
[ERROR] symbol  : class FinalizeCommand
[ERROR] location: package org.apache.hadoop.hdfs.server.protocol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:[63,45]
 cannot find symbol
[ERROR] symbol  : class RegisterCommand
[ERROR] location: package org.apache.hadoop.hdfs.server.protocol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:[154,45]
 cannot find symbol
[ERROR] symbol  : class FinalizeCommand
[ERROR] location: package org.apache.hadoop.hdfs.server.protocol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java:[31,6]
 cannot find symbol
[ERROR] symbol: class RegisterCommand
[ERROR] RegisterCommand.class, FinalizeCommand.class,
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeCommand.java:[31,29]
 cannot find symbol
[ERROR] symbol: class FinalizeCommand
[ERROR] RegisterCommand.class, FinalizeCommand.class,
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java:[807,17]
 cannot find symbol
[ERROR] symbol  : class FinalizeCommand
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:[863,39]
 cannot find symbol
[ERROR] symbol  : variable RegisterCommand
[ERROR] location: class 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:[873,39]
 cannot find symbol
[ERROR] symbol  : variable RegisterCommand
[ERROR] location: class 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:[1422,34]
 cannot find symbol
[ERROR] symbol  : class FinalizeCommand
[ERROR] location: class 
org.apache.hadoop.hdfs.server.datanode.DataNode.BPOfferService
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, ple

Hadoop-Hdfs-22-branch - Build # 104 - Still Failing

2011-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/104/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 706711 lines...]
[junit] 2011-10-24 21:06:22,601 INFO  datanode.DataNode 
(DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads 
is 2
[junit] 2011-10-24 21:06:22,602 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2011-10-24 21:06:22,602 INFO  datanode.DataNode 
(DataNode.java:run(1451)) - DatanodeRegistration(127.0.0.1:33846, 
storageID=DS-1713354651-67.195.138.28-33846-1319490372000, infoPort=60237, 
ipcPort=47606):Finishing DataNode in: 
FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
[junit] 2011-10-24 21:06:22,602 INFO  ipc.Server (Server.java:stop(1693)) - 
Stopping server on 47606
[junit] 2011-10-24 21:06:22,602 INFO  datanode.DataNode 
(DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-10-24 21:06:22,603 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-10-24 21:06:22,603 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-10-24 21:06:22,603 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-10-24 21:06:22,603 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:stop(1693)) - 
Stopping server on 56297
[junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(1525)) - 
IPC Server handler 0 on 56297: exiting
[junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(498)) - 
Stopping IPC Server listener on 56297
[junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(638)) - 
Stopping IPC Server Responder
[junit] 2011-10-24 21:06:22,705 INFO  datanode.DataNode 
(DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2011-10-24 21:06:22,806 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2011-10-24 21:06:22,806 INFO  datanode.DataNode 
(DataNode.java:run(1451)) - DatanodeRegistration(127.0.0.1:48594, 
storageID=DS-2089348076-67.195.138.28-48594-1319490371879, infoPort=35335, 
ipcPort=56297):Finishing DataNode in: 
FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-10-24 21:06:22,806 INFO  ipc.Server (Server.java:stop(1693)) - 
Stopping server on 56297
[junit] 2011-10-24 21:06:22,806 INFO  datanode.DataNode 
(DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-10-24 21:06:22,807 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-10-24 21:06:22,807 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-10-24 21:06:22,807 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-10-24 21:06:22,909 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2911)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-10-24 21:06:22,909 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2011-10-24 21:06:22,909 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(637)) - Number of transactions: 6 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of 
syncs: 4 SyncTimes(ms): 61 3 
[junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:stop(1693)) - 
Stopping server on 45512
[junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - 
IPC Server handler 0 on 45512: exiting
[junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - 
IPC Server handler 1 on 45512: exiting
[junit] 2011-10-24 21:06:22,921 INFO  ipc.Server 

[jira] [Created] (HDFS-2498) TestParallelRead times out consistently on Jenkins

2011-10-24 Thread Konstantin Shvachko (Created) (JIRA)
TestParallelRead times out consistently on Jenkins
--

 Key: HDFS-2498
 URL: https://issues.apache.org/jira/browse/HDFS-2498
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
 Fix For: 0.22.0


During last several Jenkins builds TestParallelRead consistently fails. See 
Hadoop-Hdfs-22-branch for logs.





Re: replication in HDFS

2011-10-24 Thread Ramkumar Vadali
(sorry for the delay in replying)

Hi Zheng

You are right about HDFS RAID. It is used to save space, and is not involved
in the file write path. The generation of parity blocks and reducing
replication factor happens after a configurable amount of time.

What is the design you have in mind? When the HDFS file is being written,
the data is generated block-by-block. But generating parity blocks will
require multiple source blocks to be ready, so the writer will need to
buffer the original data, either in memory or on disk. If it is saved on
disk because of memory pressure, will this be similar to writing the file
with replication 2?

Ram


On Thu, Oct 13, 2011 at 1:16 AM, Zheng Da  wrote:

> Hello all,
>
> Right now HDFS is still using simple replication to increase data
> reliability. Even though it works, it wastes disk space as well as
> network and disk bandwidth. For data-intensive applications (which
> need to write large results to HDFS), it limits the
> throughput of MapReduce. It is also very energy-inefficient.
>
> Is the community trying to use erasure codes to increase data
> reliability? I know someone is working on HDFS-RAID, but it only
> addresses disk space. In many cases, network and disk
> bandwidth are more important, since they are the factors limiting the
> throughput of MapReduce. Has anyone tried to use erasure codes to
> reduce the size of the data as it is written to HDFS? I know reducing
> replication might hurt read performance, but I think it's still
> important to reduce the amount of data written to HDFS.
>
> Thanks,
> Da
>
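The parity-generation step discussed above, over a set of buffered source blocks, can be sketched with simple XOR parity (as in RAID-5). This is an illustration only: HDFS RAID uses its own encoders, and the toy byte arrays here stand in for block contents:

```java
// Hypothetical sketch: XOR parity over k buffered source blocks. All source
// blocks are assumed to be the same length, which is why a writer must buffer
// multiple blocks (in memory or on disk) before parity can be generated.
class ParitySketch {
  static byte[] xorParity(byte[][] sourceBlocks) {
    byte[] parity = new byte[sourceBlocks[0].length];
    for (byte[] block : sourceBlocks) {
      for (int i = 0; i < parity.length; i++) {
        parity[i] ^= block[i];   // XOR is associative: order doesn't matter
      }
    }
    return parity;
  }

  // Any single lost block is rebuilt by XOR-ing the parity with the survivors.
  static byte[] recover(byte[] parity, byte[][] survivors) {
    byte[][] all = new byte[survivors.length + 1][];
    all[0] = parity;
    System.arraycopy(survivors, 0, all, 1, survivors.length);
    return xorParity(all);
  }
}
```

XOR parity tolerates one lost block per group; tolerating more requires a stronger code (e.g. Reed-Solomon), but the buffering question raised above is the same either way.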


[jira] [Created] (HDFS-2499) Fix RPC client creation bug from HDFS-2459

2011-10-24 Thread Suresh Srinivas (Created) (JIRA)
Fix RPC client creation bug from HDFS-2459
--

 Key: HDFS-2499
 URL: https://issues.apache.org/jira/browse/HDFS-2499
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-2499.txt

HDFS-2459 incorrectly implemented the RPC getProxy for the JournalProtocol 
client side. It sets retry policies and other policies that are not necessary.
