Re: Review Request: DFSClient.getBlockLocations returns BlockLocations with no indication that the corresponding blocks are corrupt

2010-11-04 Thread Todd Lipcon
Hi Arun,

I filed https://issues.apache.org/jira/browse/INFRA-3153

Thanks
-Todd

On Wed, Nov 3, 2010 at 9:47 PM, Arun C Murthy  wrote:

>
> On Nov 3, 2010, at 5:03 PM, Todd Lipcon wrote:
>
>  I wrote such a procmail script for review.hbase.org and posted it for the
>> ASF Infra guys a few weeks ago. We can file a new INFRA JIRA to get them
>> to
>> install/configure it.
>>
>>
> +1, thanks Todd!
>
>


-- 
Todd Lipcon
Software Engineer, Cloudera


Build failed in Hudson: Hadoop-Hdfs-trunk #477

2010-11-04 Thread Apache Hudson Server
See 

--
[...truncated 764810 lines...]
[junit] at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
[junit] at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:60)
[junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:151)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:112)
[junit] at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
[junit] at java.io.DataOutputStream.writeShort(DataOutputStream.java:150)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Status.write(DataTransferProtocol.java:120)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:545)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody0(BlockReceiver.java:931)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.write_aroundBody1$advice(BlockReceiver.java:160)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:931)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2010-11-04 15:33:41,379 WARN  datanode.DataNode (DataNode.java:checkDiskError(828)) - checkDiskError: exception: 
[junit] java.io.IOException: Connection reset by peer
[junit] 2010-11-04 15:33:41,381 INFO  datanode.DataNode (BlockReceiver.java:run(955)) - PacketResponder blk_4909546972313332591_1001 2 Exception java.io.IOException: Connection reset by peer

Why dataOut is FileOutputStream?

2010-11-04 Thread Thanh Do
Hi all,

When a datanode receives a block, it writes the block
to two streams on disk:
- the data stream (dataOut)
- the checksum stream (checksumOut)

The checksumOut is created with the following code:
   this.checksumOut = new DataOutputStream(new BufferedOutputStream(
  streams.checksumOut,
  SMALL_BUFFER_SIZE));
while dataOut is simply a FileOutputStream.

So checksumOut is buffered, but dataOut is not.

Is there a particular reason for doing so,
or does it not matter because we flush
both streams afterward anyway?
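For context on why buffering matters here: a BufferedOutputStream coalesces many small writes into a few large ones before they reach the OS. The sketch below is hypothetical illustration, not HDFS code; CountingStream is a stand-in for the raw FileOutputStream that simply counts how many write calls actually reach it:

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical demo: count how many writes reach the underlying stream
// with and without a BufferedOutputStream in front of it.
public class BufferDemo {
    static class CountingStream extends OutputStream {
        int calls = 0;
        @Override public void write(int b) { calls++; }
        @Override public void write(byte[] b, int off, int len) { calls++; }
    }

    // Returns {unbufferedCalls, bufferedCalls} after 1000 one-byte writes.
    static int[] runDemo() throws IOException {
        CountingStream rawData = new CountingStream();      // plays the role of dataOut
        CountingStream rawChecksum = new CountingStream();  // plays streams.checksumOut
        OutputStream checksumOut = new BufferedOutputStream(rawChecksum, 512);

        for (int i = 0; i < 1000; i++) {
            rawData.write(i);      // unbuffered: every byte is its own call (a syscall on a real file)
            checksumOut.write(i);  // buffered: reaches rawChecksum only when the 512-byte buffer fills
        }
        checksumOut.flush();       // pushes the remaining 488 bytes through
        return new int[] { rawData.calls, rawChecksum.calls };
    }

    public static void main(String[] args) throws IOException {
        int[] c = runDemo();
        System.out.println("unbuffered calls: " + c[0]); // 1000
        System.out.println("buffered calls:   " + c[1]); // 2 (one 512-byte chunk + one final flush)
    }
}
```

One possible explanation (a guess, not confirmed by the code here): checksums are written in small chunks of a few bytes each, so buffering them saves many system calls, while block data typically arrives in packet-sized byte arrays that are already large, which would make a buffer in front of dataOut mostly redundant.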

Thanks
Thanh