[jira] Created: (HDFS-1512) BlockSender calls deprecated method getReplica

2010-11-19 Thread Eli Collins (JIRA)
BlockSender calls deprecated method getReplica
--

 Key: HDFS-1512
 URL: https://issues.apache.org/jira/browse/HDFS-1512
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins


HDFS-680 deprecated FSDatasetInterface#getReplica; however, it is still used by 
BlockSender, which also still maintains a Replica member.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1513) Fix a number of warnings

2010-11-19 Thread Eli Collins (JIRA)
Fix a number of warnings


 Key: HDFS-1513
 URL: https://issues.apache.org/jira/browse/HDFS-1513
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.22.0, 0.23.0


Besides DeprecatedUTF8, HDFS-1512, and two warnings caused by a Java bug 
(http://bugs.sun.com/view_bug.do?bug_id=646014), there are a number of other 
warnings that we can fix.
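
Where a warning reflects deliberate use of a deprecated API, the usual fix is a 
suppression scoped to the one member that needs it. A hypothetical sketch (not 
from the actual patch; the class and method names are made up):

    // OldApi stands in for any class exposing a deprecated method.
    class OldApi {
        @Deprecated
        static int value() { return 42; }
    }

    public class WarningSketch {
        // Scoped to the single method that deliberately uses the deprecated
        // API, so new deprecation warnings elsewhere are still reported.
        @SuppressWarnings("deprecation")
        public static void main(String[] args) {
            System.out.println(OldApi.value());
        }
    }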

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Build failed in Hudson: Hadoop-Hdfs-trunk #492

2010-11-19 Thread Apache Hudson Server
See 

Changes:

[eli] HDFS-1001. DataXceiver and BlockReader disagree on when to send/recv 
CHECKSUM_OK. Contributed by bc Wong

[eli] Revert HDFS-1467, causing test timeouts.

[nigel] HADOOP-7042. Updates to test-patch.sh to include failed test names and 
improve other messaging.  Contributed by nigel.

[nigel] HDFS-1510. Added test-patch.properties required by test-patch.sh.  
Contributed by nigel

[cos] Adding IntelliJ IDEA specific extensions to be ignored.

[eli] HDFS-528. Add ability for safemode to wait for a minimum number of live 
datanodes. Contributed by Todd Lipcon

--
[...truncated 887034 lines...]
[junit] at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
[junit] at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:60)
[junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:151)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:112)
[junit] at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:542)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody0(BlockReceiver.java:932)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody1$advice(BlockReceiver.java:160)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:932)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2010-11-19 15:28:18,265 WARN  datanode.DataNode (DataNode.java:checkDiskError(828)) - checkDiskError: exception: 
[junit] java.io.IOException: Connection reset by peer
[junit] at sun.nio.ch.FileDispatcher.write0(Native Method)
[junit] at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
[junit] at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
[junit] at sun.nio.ch.IOUtil.write(IOUtil.java:75)
[junit] at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
[junit] at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:60)
[junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:151)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:112)
[junit] at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:542)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody0(BlockReceiver.java:932)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody1$advice(BlockReceiver.java:160)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:932)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2010-11-19 15:28:18,266 INFO  datanode.DataNode (BlockReceiver.java:run(956)) - PacketResponder blk_7898053707441303896_1001 2 Exception java.io.IOException: Connection reset by peer
[junit] at sun.nio.ch.FileDispatcher.write0(Native Method)
[junit] at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
[junit] at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
[junit] at sun.nio.ch.IOUtil.write(IOUtil.java:75)
[junit] at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
[junit] at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:60)
[junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:151)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:112)
[junit] at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:542)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody0(BlockReceiver.java:932)
[junit

Re: Review Request: Populate needed replication queues before leaving safe mode.

2010-11-19 Thread Patrick Kling

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/105/
---

(Updated 2010-11-19 13:07:20.231197)


Review request for hadoop-hdfs.


Changes
---

Updated test case to play nice with HDFS-1482.


Summary
---

This patch introduces a new configuration variable 
dfs.namenode.replqueue.threshold-pct that determines the fraction of blocks for 
which block reports have to be received before the NameNode will start 
initializing the needed replication queues. Once a sufficient number of block 
reports have been received, the queues are initialized while the NameNode is 
still in safe mode. After the queues are initialized, subsequent block reports 
are handled by updating the queues incrementally.

The benefit of this is twofold:
- It allows us to compute the replication queues while we are waiting for the 
last few block reports (when the NameNode is mostly idle). Once these block 
reports have been received, we can then immediately leave safe mode without 
having to wait for the computation of the needed replication queues (which 
requires a full traversal of the blocks map).
- With Raid, it may not be necessary to stay in safe mode until all blocks have 
been reported. Using this change, we could monitor if all of the missing blocks 
can be recreated using parity information and if so leave safe mode early. In 
order for this monitoring to work, we need access to the needed replication 
queues while the NameNode is still in safe mode.
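
As a rough illustration of the new knob (a sketch only, not part of the patch; 
the 0.9f value is arbitrary and the MiniDFSCluster wiring is just for 
demonstration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class ReplQueueThresholdSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Initialize the needed-replication queues once block reports
            // cover 90% of blocks, while the NameNode is still in safe mode.
            conf.setFloat("dfs.namenode.replqueue.threshold-pct", 0.9f);
            MiniDFSCluster cluster = new MiniDFSCluster(conf, 3, true, null);
            try {
                cluster.waitActive();
            } finally {
                cluster.shutdown();
            }
        }
    }

With the threshold set below 1.0, the queues are populated before the last 
block reports arrive, so leaving safe mode no longer has to pay for the full 
traversal of the blocks map.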


This addresses bug HDFS-1476.
https://issues.apache.org/jira/browse/HDFS-1476


Diffs (updated)
-

  
  http://svn.apache.org/repos/asf/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/DFSConfigKeys.java 1035545
  http://svn.apache.org/repos/asf/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/BlockManager.java 1035545
  http://svn.apache.org/repos/asf/hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java 1035545
  http://svn.apache.org/repos/asf/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/MiniDFSCluster.java 1035545
  http://svn.apache.org/repos/asf/hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java 1035545

Diff: https://reviews.apache.org/r/105/diff


Testing
---

new test case in TestListCorruptFileBlocks


Thanks,

Patrick



Re: transferToAllowed

2010-11-19 Thread Raghu Angadi
When it is set to true (the default), the DN uses the JDK's transferTo()
interface to avoid extra buffer copies.

It is possible, though rare, that a specific platform does not support this
efficiently or is unstable. The config exists to let users disable this
optimization. We have not seen any users that have disabled it, though...
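
As a rough sketch of the zero-copy path (hypothetical file path and endpoint; 
not the DataNode's actual BlockSender code):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class TransferToSketch {
        public static void main(String[] args) throws IOException {
            try (FileChannel file =
                     new FileInputStream("/tmp/block-data").getChannel();
                 SocketChannel sock = SocketChannel.open(
                     new InetSocketAddress("localhost", 50010))) {
                long pos = 0, len = file.size();
                while (pos < len) {
                    // No user-space buffer: where the platform supports it,
                    // the kernel moves bytes from the page cache straight
                    // to the socket.
                    pos += file.transferTo(pos, len - pos, sock);
                }
            }
        }
    }

With the optimization disabled, the sender would instead read each chunk into 
a buffer and write it to the socket, costing an extra copy per chunk.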

Raghu.

On Sun, Nov 14, 2010 at 9:56 AM, Thanh Do  wrote:

> got it here
> https://issues.apache.org/jira/browse/HADOOP-3164
>
>
> On Sun, Nov 14, 2010 at 11:31 AM, Thanh Do  wrote:
>
>> Hi all,
>>
>> Can somebody let me know what this
>> parameter is used for:
>>
>> dfs.datanode.transferTo.allowed
>>
>> It is not in the default config,
>> and maxChunksPerPacket depends on it.
>>
>> Thanks so much.
>> Thanh
>>
>
>


[jira] Resolved: (HDFS-1467) Append pipeline never succeeds with more than one replica

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-1467.
---

   Resolution: Fixed
Fix Version/s: 0.23.0
   0.22.0

I've committed this to trunk and branch 22. Thanks Todd.

> Append pipeline never succeeds with more than one replica
> -
>
> Key: HDFS-1467
> URL: https://issues.apache.org/jira/browse/HDFS-1467
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0, 0.23.0
>
> Attachments: failed-TestPipelines.txt, hdfs-1467-fixed.txt, 
> hdfs-1467.txt
>
>
> TestPipelines appears to be failing on trunk:
> Should be RBW replica after sequence of calls append()/write()/hflush() 
> expected: but was:
> junit.framework.AssertionFailedError: Should be RBW replica after sequence of 
> calls append()/write()/hflush() expected: but was:
> at 
> org.apache.hadoop.hdfs.TestPipelines.pipeline_01(TestPipelines.java:109)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Build failed in Hudson: Hadoop-Hdfs-trunk #493

2010-11-19 Thread Apache Hudson Server
See 

Changes:

[nigel] Fix bug in tar file name

[nigel] Add some comments to commitBuild.sh and put artifacts in a single 
directory that can be cleaned up.

[eli] HDFS-1467. Append pipeline never succeeds with more than one replica. 
Contributed by Todd Lipcon

[cos] HDFS-1167. New property for local conf directory in system-test-hdfs.xml 
file. Contributed by Vinay Thota.

--
[...truncated 843 lines...]
A src/c++/libhdfs/hdfsJniHelper.c
AU src/c++/libhdfs/Makefile.am
A src/c++/libhdfs/missing
A src/c++/libhdfs/hdfs.h
A src/c++/libhdfs/hdfsJniHelper.h
A src/c++/libhdfs/aclocal.m4
A src/c++/libhdfs/install-sh
A src/docs
A src/docs/forrest.properties
A src/docs/status.xml
A src/docs/src
A src/docs/src/documentation
A src/docs/src/documentation/conf
A src/docs/src/documentation/conf/cli.xconf
A src/docs/src/documentation/skinconf.xml
A src/docs/src/documentation/content
A src/docs/src/documentation/content/xdocs
A src/docs/src/documentation/content/xdocs/SLG_user_guide.xml
A src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
A src/docs/src/documentation/content/xdocs/site.xml
A src/docs/src/documentation/content/xdocs/faultinject_framework.xml
A src/docs/src/documentation/content/xdocs/hdfsproxy.xml
A src/docs/src/documentation/content/xdocs/index.xml
A src/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
A src/docs/src/documentation/content/xdocs/tabs.xml
A src/docs/src/documentation/content/xdocs/libhdfs.xml
A src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
A src/docs/src/documentation/content/xdocs/hdfs_design.xml
A src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
A src/docs/src/documentation/resources
A src/docs/src/documentation/resources/images
AU src/docs/src/documentation/resources/images/hdfsdatanodes.odg
AU src/docs/src/documentation/resources/images/request-identify.jpg
AU src/docs/src/documentation/resources/images/architecture.gif
AU src/docs/src/documentation/resources/images/hadoop-logo-big.jpg
AU src/docs/src/documentation/resources/images/hadoop-logo.jpg
AU src/docs/src/documentation/resources/images/core-logo.gif
AU src/docs/src/documentation/resources/images/hdfsdatanodes.png
AU src/docs/src/documentation/resources/images/hdfsarchitecture.gif
AU src/docs/src/documentation/resources/images/FI-framework.gif
AU src/docs/src/documentation/resources/images/favicon.ico
AU src/docs/src/documentation/resources/images/hdfsarchitecture.odg
AU src/docs/src/documentation/resources/images/FI-framework.odg
AU src/docs/src/documentation/resources/images/hdfs-logo.jpg
AU src/docs/src/documentation/resources/images/hdfsproxy-forward.jpg
AU src/docs/src/documentation/resources/images/hdfsproxy-server.jpg
AU src/docs/src/documentation/resources/images/hdfsproxy-overview.jpg
AU src/docs/src/documentation/resources/images/hdfsarchitecture.png
AU src/docs/src/documentation/resources/images/hdfsdatanodes.gif
A src/docs/src/documentation/README.txt
A src/docs/src/documentation/classes
A src/docs/src/documentation/classes/CatalogManager.properties
A src/docs/changes
A src/docs/changes/ChangesFancyStyle.css
AU src/docs/changes/changes2html.pl
A src/docs/changes/ChangesSimpleStyle.css
A src/docs/releasenotes.html
A bin
A bin/hdfs-config.sh
AU bin/start-dfs.sh
AU bin/stop-balancer.sh
AU bin/hdfs
A bin/stop-secure-dns.sh
AU bin/stop-dfs.sh
AU bin/start-balancer.sh
A bin/start-secure-dns.sh
AU build.xml
 U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/src/test/bin' at 
-1 into 
'
AU src/test/bin/test-patch.sh
At revision 1037129
At revision 1037129
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
A commitBuild.sh
A hudsonEnv.sh
AU hudsonBuildHadoopNightly.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1037129
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/src/test/bin 
since the previous build
[Hadoop-Hdfs-trunk] $ /bin/bash /tmp/hudson4036872645832535124.sh


==

Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #463

2010-11-19 Thread Apache Hudson Server
See 

--
[...truncated 2436 lines...]

ivy-init:

ivy-resolve-common:

ivy-retrieve-common:

init:
[touch] Creating /tmp/null286071475
   [delete] Deleting: /tmp/null286071475

compile-hdfs-classes:
[paranamer] Generating parameter names from 

 to 

[paranamer] Generating parameter names from 

 to 


compile-core:

jar:
  [jar] Building jar: 


findbugs:
[mkdir] Created dir: 

 [findbugs] Executing findbugs from ant task
 [findbugs] Running FindBugs...
 [findbugs] Calculating exit code...
 [findbugs] Exit code set to: 0
 [findbugs] Output saved to 

 [xslt] Processing 

 to 

 [xslt] Loading stylesheet 
/homes/gkesavan/tools/findbugs/latest/src/xsl/default.xsl

BUILD SUCCESSFUL
Total time: 2 minutes 48 seconds


==
==
STORE: saving artifacts
==
==




==
==
CLEAN: cleaning workspace
==
==


Buildfile: build.xml

clean-contrib:

clean:

check-libhdfs-fuse:

clean:
Trying to override old definition of task macro_tar

clean:
 [echo] contrib: hdfsproxy
   [delete] Deleting directory 


clean:
 [echo] contrib: thriftfs
   [delete] Deleting directory 


clean-fi:
   [delete] Deleting directory 


clean-sign:

clean:
   [delete] Deleting directory 

   [delete] Deleting directory 

   [delete] Deleting: 

   [delete] Deleting: 

   [delete] Deleting: 

   [delete] Deleting: 


BUILD SUCCESSFUL
Total time: 1 second


==
==
ANALYSIS: ant -Drun.clover=true clover checkstyle run-commit-test 
generate-clover-reports -Dtest.junit.output.format=xml -Dtest.output=no 
-Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME 
-Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME 
-Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
==
==


Buildfile: build.xml

clover.setup:
[mkdir] Created dir: 

[clover-setup] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-setup] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-setup] Clover: Open Source License registered to Apache.
[clover-setup] Clover is enabled with initstring 
'
 [echo] HDFS-783: test-libhdfs 

Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #464

2010-11-19 Thread Apache Hudson Server
See 

--
[...truncated 2440 lines...]

ivy-init:

ivy-resolve-common:

ivy-retrieve-common:

init:
[touch] Creating /tmp/null1921267949
   [delete] Deleting: /tmp/null1921267949

compile-hdfs-classes:
[paranamer] Generating parameter names from 

 to 

[paranamer] Generating parameter names from 

 to 


compile-core:

jar:
  [jar] Building jar: 


findbugs:
[mkdir] Created dir: 

 [findbugs] Executing findbugs from ant task
 [findbugs] Running FindBugs...
 [findbugs] Calculating exit code...
 [findbugs] Exit code set to: 0
 [findbugs] Output saved to 

 [xslt] Processing 

 to 

 [xslt] Loading stylesheet 
/homes/gkesavan/tools/findbugs/latest/src/xsl/default.xsl

BUILD SUCCESSFUL
Total time: 2 minutes 52 seconds


==
==
STORE: saving artifacts
==
==




==
==
CLEAN: cleaning workspace
==
==


Buildfile: build.xml

clean-contrib:

clean:

check-libhdfs-fuse:

clean:
Trying to override old definition of task macro_tar

clean:
 [echo] contrib: hdfsproxy
   [delete] Deleting directory 


clean:
 [echo] contrib: thriftfs
   [delete] Deleting directory 


clean-fi:
   [delete] Deleting directory 


clean-sign:

clean:
   [delete] Deleting directory 

   [delete] Deleting directory 

   [delete] Deleting: 

   [delete] Deleting: 

   [delete] Deleting: 

   [delete] Deleting: 


BUILD SUCCESSFUL
Total time: 1 second


==
==
ANALYSIS: ant -Drun.clover=true clover checkstyle run-commit-test 
generate-clover-reports -Dtest.junit.output.format=xml -Dtest.output=no 
-Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME 
-Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME 
-Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
==
==


Buildfile: build.xml

clover.setup:
[mkdir] Created dir: 

[clover-setup] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-setup] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-setup] Clover: Open Source License registered to Apache.
[clover-setup] Clover is enabled with initstring 
'
 [echo] HDFS-783: test-libhdf