Hadoop-Hdfs-22-branch - Build # 45 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/45/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2130 lines...]
[artifact:install-provider] 1 required artifact is missing.
[artifact:install-provider] 
[artifact:install-provider] for artifact: 
[artifact:install-provider]   unspecified:unspecified:jar:0.0
[artifact:install-provider] 
[artifact:install-provider] from the specified remote repositories:
[artifact:install-provider]   central (http://repo1.maven.org/maven2)
[artifact:install-provider] 
[artifact:install-provider] 

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:1667:
 Error downloading wagon provider from the remote repository: Missing:
--
1) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

  Try downloading the file manually from the project website.

  Then, install it using the command: 
  mvn install:install-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
  mvn deploy:deploy-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
1) unspecified:unspecified:jar:0.0
2) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

--
1 required artifact is missing.

for artifact: 
  unspecified:unspecified:jar:0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2)



Total time: 1 minute 19 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/test/findbugs': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk - Build # 665 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/665/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 772932 lines...]
[junit] 
[junit] 2011-05-13 12:44:12,753 INFO  ipc.Server (Server.java:run(691)) - 
Stopping IPC Server Responder
[junit] 2011-05-13 12:44:12,756 INFO  datanode.DataNode 
(DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active 
threads is 0
[junit] 2011-05-13 12:44:12,756 WARN  datanode.DataNode 
(DataNode.java:offerService(1065)) - BPOfferService for block 
pool=BP-1832701016-127.0.1.1-1305290651387 received 
exception:java.lang.InterruptedException
[junit] 2011-05-13 12:44:12,756 WARN  datanode.DataNode 
(DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:40251, 
storageID=DS-267941938-127.0.1.1-40251-1305290652092, infoPort=53682, 
ipcPort=58365, storageInfo=lv=-35;cid=testClusterID;nsid=1349710115;c=0) ending 
block pool service for: BP-1832701016-127.0.1.1-1305290651387
[junit] 2011-05-13 12:44:12,756 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:removeBlockPool(277)) - Removed 
bpid=BP-1832701016-127.0.1.1-1305290651387 from blockPoolScannerMap
[junit] 2011-05-13 12:44:12,756 INFO  datanode.DataNode 
(FSDataset.java:shutdownBlockPool(2560)) - Removing block pool 
BP-1832701016-127.0.1.1-1305290651387
[junit] 2011-05-13 12:44:12,756 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-05-13 12:44:12,757 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-05-13 12:44:12,757 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(1043)) - Shutting down DataNode 0
[junit] 2011-05-13 12:44:12,757 WARN  datanode.DirectoryScanner 
(DirectoryScanner.java:shutdown(297)) - DirectoryScanner: shutdown has been 
called
[junit] 2011-05-13 12:44:12,758 INFO  datanode.BlockPoolSliceScanner 
(BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work 
left in prev period : 0.00%
[junit] 2011-05-13 12:44:12,858 INFO  ipc.Server (Server.java:stop(1629)) - 
Stopping server on 51070
[junit] 2011-05-13 12:44:12,858 INFO  ipc.Server (Server.java:run(1464)) - 
IPC Server handler 0 on 51070: exiting
[junit] 2011-05-13 12:44:12,858 INFO  ipc.Server (Server.java:run(487)) - 
Stopping IPC Server listener on 51070
[junit] 2011-05-13 12:44:12,858 INFO  datanode.DataNode 
(DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active 
threads is 1
[junit] 2011-05-13 12:44:12,858 INFO  ipc.Server (Server.java:run(691)) - 
Stopping IPC Server Responder
[junit] 2011-05-13 12:44:12,859 WARN  datanode.DataNode 
(DataXceiverServer.java:run(143)) - 127.0.0.1:51410:DataXceiveServer: 
java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] 
[junit] 2011-05-13 12:44:12,861 INFO  datanode.DataNode 
(DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active 
threads is 0
[junit] 2011-05-13 12:44:12,861 WARN  datanode.DataNode 
(DataNode.java:offerService(1065)) - BPOfferService for block 
pool=BP-1832701016-127.0.1.1-1305290651387 received 
exception:java.lang.InterruptedException
[junit] 2011-05-13 12:44:12,861 WARN  datanode.DataNode 
(DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:51410, 
storageID=DS-1845679539-127.0.1.1-51410-1305290651977, infoPort=40982, 
ipcPort=51070, storageInfo=lv=-35;cid=testClusterID;nsid=1349710115;c=0) ending 
block pool service for: BP-1832701016-127.0.1.1-1305290651387
[junit] 2011-05-13 12:44:12,961 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:removeBlockPool(277)) - Removed 
bpid=BP-1832701016-127.0.1.1-1305290651387 from blockPoolScannerMap
[junit] 2011-05-13 12:44:12,961 INFO  datanode.DataNode 
(FSDataset.java:shutdownBlockPool(2560)) - Removing block pool 
BP-1832701016-127.0.1.1-1305290651387
[junit] 2011-05-13 12:44:12,961 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-05-13 12:44:12,962 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-05-13 12:44:13,063 WARN  namen

[jira] [Created] (HDFS-1936) Updating the layout version from HDFS-1822 causes problems where logic depends on layout version

2011-05-13 Thread Suresh Srinivas (JIRA)
Updating the layout version from HDFS-1822 causes problems where logic depends 
on layout version


 Key: HDFS-1936
 URL: https://issues.apache.org/jira/browse/HDFS-1936
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0, 0.23.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 0.22.0, 0.23.0


In HDFS-1822 and HDFS-1842, the layout versions for the 0.20.203, 0.20.204, 0.22 
and trunk branches were changed. Some of the namenode logic that depends on the 
layout version is broken as a result. See the JIRA comments for details.
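
For illustration, a minimal sketch of the fragile pattern, assuming made-up 
constant names and values (this is not actual HDFS source):

{noformat}
// Hypothetical sketch: layout versions are negative integers that decrease
// as the disk format evolves, so feature checks are often written as
// magnitude comparisons against a constant.
public class LayoutVersionCheck {

  // Assumed value for illustration: suppose a feature landed at -35.
  static final int FEATURE_VERSION = -35;

  /** True if an image written at storedVersion already has the feature. */
  static boolean supportsFeature(int storedVersion) {
    // Fragile: this assumes every version <= -35 includes the feature.
    // If another branch later reuses -35 for an unrelated change (as
    // happened after HDFS-1822/HDFS-1842), the check silently gives the
    // wrong answer for images written by that branch.
    return storedVersion <= FEATURE_VERSION;
  }

  public static void main(String[] args) {
    System.out.println(supportsFeature(-34)); // false: older image
    System.out.println(supportsFeature(-36)); // true: newer image
  }
}
{noformat}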

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 645 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/645/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1347 lines...]
 [echo] Start weaving aspects in place
 [echo] Weaving of aspects is finished

ivy-download:
  [get] Getting: 
http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
  [get] To: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivy-2.1.0.jar
  [get] Not modified - so not downloaded

ivy-init-dirs:

ivy-probe-antlib:

ivy-init-antlib:

ivy-init:
[ivy:configure] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-common:

ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 
'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-system:

ivy-retrieve-system:

-compile-test-system.wrapper:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:183:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:193:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:449:
 Reference ivy-hdfs.classpath not found.

Total time: 56 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 646 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/646/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1346 lines...]
 [echo] Start weaving aspects in place
 [echo] Weaving of aspects is finished

ivy-download:
  [get] Getting: 
http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
  [get] To: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivy-2.1.0.jar
  [get] Not modified - so not downloaded

ivy-init-dirs:

ivy-probe-antlib:

ivy-init-antlib:

ivy-init:
[ivy:configure] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-common:

ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 
'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-system:

ivy-retrieve-system:

-compile-test-system.wrapper:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:183:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:193:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:449:
 Reference ivy-hdfs.classpath not found.

Total time: 55 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 647 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/647/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1347 lines...]
 [echo] Start weaving aspects in place
 [echo] Weaving of aspects is finished

ivy-download:
  [get] Getting: 
http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
  [get] To: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivy-2.1.0.jar
  [get] Not modified - so not downloaded

ivy-init-dirs:

ivy-probe-antlib:

ivy-init-antlib:

ivy-init:
[ivy:configure] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-common:

ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 
'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-system:

ivy-retrieve-system:

-compile-test-system.wrapper:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:183:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:193:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:449:
 Reference ivy-hdfs.classpath not found.

Total time: 53 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


[jira] [Created] (HDFS-1937) Improve DataTransferProtocol

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)
Improve DataTransferProtocol


 Key: HDFS-1937
 URL: https://issues.apache.org/jira/browse/HDFS-1937
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client
Reporter: Tsz Wo (Nicholas), SZE


This is an umbrella JIRA for improving {{DataTransferProtocol}}.

{{DataTransferProtocol}} is implemented directly on sockets, and the code is 
scattered across the datanode classes and the {{DFSClient}} classes.
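
As an illustration of the pattern being criticized, here is a minimal sketch 
with an assumed opcode and protocol version (not actual DFSClient code):

{noformat}
// Illustrative sketch only: it shows the kind of hand-rolled wire framing
// that is duplicated across callers when a protocol is coded straight
// against sockets instead of behind one API.
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RawOpWriter {

  static final byte OP_READ_BLOCK = 81;   // assumed opcode, for illustration

  static void sendReadRequest(String host, int port, long blockId)
      throws IOException {
    Socket s = new Socket();
    try {
      s.connect(new InetSocketAddress(host, port));
      DataOutputStream out = new DataOutputStream(s.getOutputStream());
      // Every caller re-encodes the wire format by hand: version, opcode,
      // then the operation's fields. Centralizing this framing is what an
      // improved DataTransferProtocol would buy.
      out.writeShort(19);                 // assumed protocol version
      out.writeByte(OP_READ_BLOCK);
      out.writeLong(blockId);
      out.flush();
    } finally {
      s.close();
    }
  }
}
{noformat}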

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-1917) Clean up duplication of dependent jar files

2011-05-13 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-1917:
---


> Clean up duplication of dependent jar files
> ---
>
> Key: HDFS-1917
> URL: https://issues.apache.org/jira/browse/HDFS-1917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
> Environment: Java 6, RHEL 5.5
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.23.0
>
> Attachments: HDFS-1917-1.patch, HDFS-1917.patch
>
>
> For trunk, the build and deployment tree looks like this:
> hadoop-common-0.2x.y
> hadoop-hdfs-0.2x.y
> hadoop-mapred-0.2x.y
> Technically, HDFS's third-party dependent jar files should be fetched from 
> hadoop-common.  However, they are currently fetched from hadoop-hdfs/lib only.  
> It would be nice to eliminate the need for duplicated jar files at 
> build time.
> There are two options for managing this dependency list: continue to enhance 
> the ant build structure to fetch and filter jar file dependencies using ivy, 
> or take the opportunity to convert the build structure to maven and let maven 
> manage the provided jar files.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1938) Reference ivy-hdfs.classpath not found.

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)
 Reference ivy-hdfs.classpath not found.


 Key: HDFS-1938
 URL: https://issues.apache.org/jira/browse/HDFS-1938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Eric Yang
Priority: Minor


{noformat}
$ant test-system
...
BUILD FAILED
/export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:129: The following 
error occurred while executing this line:
/export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:183: The following 
error occurred while executing this line:
/export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:193: The following 
error occurred while executing this line:
/export/crawlspace/tsz/hdfs/h1/build.xml:449: Reference ivy-hdfs.classpath not 
found.
{noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-1917) Clean up duplication of dependent jar files

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-1917.
--

Resolution: Fixed

Todd, good catch.  Filed HDFS-1938 and Eric is looking at it.

> Clean up duplication of dependent jar files
> ---
>
> Key: HDFS-1917
> URL: https://issues.apache.org/jira/browse/HDFS-1917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
> Environment: Java 6, RHEL 5.5
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.23.0
>
> Attachments: HDFS-1917-1.patch, HDFS-1917.patch
>
>
> For trunk, the build and deployment tree looks like this:
> hadoop-common-0.2x.y
> hadoop-hdfs-0.2x.y
> hadoop-mapred-0.2x.y
> Technically, HDFS's third-party dependent jar files should be fetched from 
> hadoop-common.  However, they are currently fetched from hadoop-hdfs/lib only.  
> It would be nice to eliminate the need for duplicated jar files at 
> build time.
> There are two options for managing this dependency list: continue to enhance 
> the ant build structure to fetch and filter jar file dependencies using ivy, 
> or take the opportunity to convert the build structure to maven and let maven 
> manage the provided jar files.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 648 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/648/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1424 lines...]
[artifact:install-provider] for artifact: 
[artifact:install-provider]   unspecified:unspecified:jar:0.0
[artifact:install-provider] 
[artifact:install-provider] from the specified remote repositories:
[artifact:install-provider]   central (http://repo1.maven.org/maven2)
[artifact:install-provider] 
[artifact:install-provider] 

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1681:
 Error downloading wagon provider from the remote repository: Missing:
--
1) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

  Try downloading the file manually from the project website.

  Then, install it using the command: 
  mvn install:install-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
  mvn deploy:deploy-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
1) unspecified:unspecified:jar:0.0
2) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

--
1 required artifact is missing.

for artifact: 
  unspecified:unspecified:jar:0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2)



Total time: 56 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


[jira] [Resolved] (HDFS-1938) Reference ivy-hdfs.classpath not found.

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-1938.
--

   Resolution: Fixed
Fix Version/s: 0.23.0

Skipping Hudson since it won't detect this.

I have committed this.  Thanks, Eric!

>  Reference ivy-hdfs.classpath not found.
> 
>
> Key: HDFS-1938
> URL: https://issues.apache.org/jira/browse/HDFS-1938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Eric Yang
>Priority: Minor
> Fix For: 0.23.0
>
> Attachments: HDFS-1938.patch
>
>
> {noformat}
> $ant test-system
> ...
> BUILD FAILED
> /export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:129: The following 
> error occurred while executing this line:
> /export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:183: The following 
> error occurred while executing this line:
> /export/crawlspace/tsz/hdfs/h1/src/test/aop/build/aop.xml:193: The following 
> error occurred while executing this line:
> /export/crawlspace/tsz/hdfs/h1/build.xml:449: Reference ivy-hdfs.classpath 
> not found.
> {noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1939) ivy: test conf should not extend common conf

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)
ivy: test conf should not extend common conf


 Key: HDFS-1939
 URL: https://issues.apache.org/jira/browse/HDFS-1939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Eric Yang


A similar improvement to HADOOP-7289.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1940) Datanode can have more than one copy of same block when a failed disk is coming back in datanode

2011-05-13 Thread Rajit (JIRA)
Datanode can have more than one copy of same block when a failed disk is coming 
back in datanode


 Key: HDFS-1940
 URL: https://issues.apache.org/jira/browse/HDFS-1940
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.204.0
Reporter: Rajit


A datanode can end up with more than one copy of the same block when a disk fails 
and comes back after some time. These duplicate blocks are not deleted even after 
a datanode and namenode restart.

This can only happen in a corner case: because of the disk failure, the data 
block is re-replicated to another disk of the same datanode.


To simulate this scenario I copied a data block and the associated .meta file 
from one disk to another disk of the same datanode, so the datanode has two 
copies of the same replica. I then restarted the datanode and the namenode; the 
extra data block and meta file were still not deleted from the datanode:

[hdfs@gsbl90192 rajsaha]$ ls -l `find 
/grid/{0,1,2,3}/hadoop/var/hdfs/data/current -name blk_*`
-rw-r--r-- 1 hdfs users 7814 May 13 21:05 
/grid/1/hadoop/var/hdfs/data/current/blk_1727421609840461376
-rw-r--r-- 1 hdfs users   71 May 13 21:05 
/grid/1/hadoop/var/hdfs/data/current/blk_1727421609840461376_579992.meta
-rw-r--r-- 1 hdfs users 7814 May 13 21:14 
/grid/3/hadoop/var/hdfs/data/current/blk_1727421609840461376
-rw-r--r-- 1 hdfs users   71 May 13 21:14 
/grid/3/hadoop/var/hdfs/data/current/blk_1727421609840461376_579992.meta
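
As a rough illustration, a standalone sketch that detects this condition by 
scanning the volumes for block files with the same name (the paths and the flat 
directory scan are assumptions for this example, not DataNode code; real data 
directories nest blocks in subdirectories):

{noformat}
// Hypothetical diagnostic: flag block files present on more than one volume,
// as in the listing above.
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateReplicaFinder {
  public static void main(String[] args) {
    String[] volumes = {                  // assumed dfs.data.dir layout
        "/grid/0/hadoop/var/hdfs/data/current",
        "/grid/1/hadoop/var/hdfs/data/current",
        "/grid/2/hadoop/var/hdfs/data/current",
        "/grid/3/hadoop/var/hdfs/data/current"};
    Map<String, List<File>> byName = new HashMap<String, List<File>>();
    for (String v : volumes) {
      File[] files = new File(v).listFiles();
      if (files == null) continue;        // volume missing or unreadable
      for (File f : files) {
        if (!f.getName().startsWith("blk_")) continue;
        List<File> hits = byName.get(f.getName());
        if (hits == null) {
          hits = new ArrayList<File>();
          byName.put(f.getName(), hits);
        }
        hits.add(f);
      }
    }
    for (Map.Entry<String, List<File>> e : byName.entrySet()) {
      if (e.getValue().size() > 1) {      // same block file on two volumes
        System.out.println("duplicate replica: " + e.getValue());
      }
    }
  }
}
{noformat}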

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 649 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/649/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1424 lines...]
[artifact:install-provider] for artifact: 
[artifact:install-provider]   unspecified:unspecified:jar:0.0
[artifact:install-provider] 
[artifact:install-provider] from the specified remote repositories:
[artifact:install-provider]   central (http://repo1.maven.org/maven2)
[artifact:install-provider] 
[artifact:install-provider] 

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1681:
 Error downloading wagon provider from the remote repository: Missing:
--
1) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

  Try downloading the file manually from the project website.

  Then, install it using the command: 
  mvn install:install-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
  mvn deploy:deploy-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
1) unspecified:unspecified:jar:0.0
2) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

--
1 required artifact is missing.

for artifact: 
  unspecified:unspecified:jar:0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2)



Total time: 55 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 650 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/650/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1425 lines...]
[artifact:install-provider] for artifact: 
[artifact:install-provider]   unspecified:unspecified:jar:0.0
[artifact:install-provider] 
[artifact:install-provider] from the specified remote repositories:
[artifact:install-provider]   central (http://repo1.maven.org/maven2)
[artifact:install-provider] 
[artifact:install-provider] 

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1681:
 Error downloading wagon provider from the remote repository: Missing:
--
1) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

  Try downloading the file manually from the project website.

  Then, install it using the command: 
  mvn install:install-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
  mvn deploy:deploy-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
1) unspecified:unspecified:jar:0.0
2) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

--
1 required artifact is missing.

for artifact: 
  unspecified:unspecified:jar:0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2)



Total time: 54 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 651 - Still Failing

2011-05-13 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/651/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1424 lines...]
[artifact:install-provider] for artifact: 
[artifact:install-provider]   unspecified:unspecified:jar:0.0
[artifact:install-provider] 
[artifact:install-provider] from the specified remote repositories:
[artifact:install-provider]   central (http://repo1.maven.org/maven2)
[artifact:install-provider] 
[artifact:install-provider] 

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1681:
 Error downloading wagon provider from the remote repository: Missing:
--
1) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

  Try downloading the file manually from the project website.

  Then, install it using the command: 
  mvn install:install-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
  mvn deploy:deploy-file -DgroupId=org.apache.maven.wagon 
-DartifactId=wagon-http -Dversion=1.0-beta-2 -Dpackaging=jar 
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
1) unspecified:unspecified:jar:0.0
2) org.apache.maven.wagon:wagon-http:jar:1.0-beta-2

--
1 required artifact is missing.

for artifact: 
  unspecified:unspecified:jar:0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2)



Total time: 55 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


[jira] [Created] (HDFS-1941) Remove -genclusterid from NameNode startup options

2011-05-13 Thread Bharath Mundlapudi (JIRA)
Remove -genclusterid from NameNode startup options
--

 Key: HDFS-1941
 URL: https://issues.apache.org/jira/browse/HDFS-1941
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor


Currently, namenode -genclusterid is a helper utility for generating a unique 
clusterid. This option becomes unnecessary once namenode -format automatically 
generates the clusterid.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira