[jira] [Created] (HADOOP-8058) Configure Jenkins to run test from root trunk

2012-02-11 Thread Eric Charles (Created) (JIRA)
Configure Jenkins to run test from root trunk
---------------------------------------------

 Key: HADOOP-8058
 URL: https://issues.apache.org/jira/browse/HADOOP-8058
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: hudson
Reporter: Eric Charles


The current Jenkins configuration does a 'cd hadoop-common-project' before 
invoking $MAVEN_HOME/bin/mvn test -Pclover..., which is why we only have a 
partial report (only for hadoop-common). See for example 
https://builds.apache.org/job/Hadoop-Common-trunk/315/testReport/

If we had the complete report, anyone could compare their local test results 
to the Jenkins ones (which are supposed to be the reference).

If we do this, there may be more Jenkins mails on the lists (instability, 
errors...), but I find the advantages worth the price.

Thx,
Eric


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8058) Configure Jenkins to run test from root trunk

2012-02-11 Thread Eric Charles (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Charles resolved HADOOP-8058.
----------------------------------

Resolution: Won't Fix

Sorry, forget this.
There are separate HDFS and MapReduce jobs:
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Commit/
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Mapreduce-trunk/

I was confused by the name "hadoop-common", thinking it was meant to build the 
whole http://svn.apache.org/repos/asf/hadoop/common/trunk/ tree 
('hadoop-common').

Eric

> Configure Jenkins to run test from root trunk
> ---------------------------------------------
>
> Key: HADOOP-8058
> URL: https://issues.apache.org/jira/browse/HADOOP-8058
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: hudson
>Reporter: Eric Charles
>
> The current Jenkins configuration does a 'cd hadoop-common-project' before 
> invoking $MAVEN_HOME/bin/mvn test -Pclover..., which is why we only have a 
> partial report (only for hadoop-common). See for example 
> https://builds.apache.org/job/Hadoop-Common-trunk/315/testReport/
> If we had the complete report, anyone could compare their local test results 
> to the Jenkins ones (which are supposed to be the reference).
> If we do this, there may be more Jenkins mails on the lists (instability, 
> errors...), but I find the advantages worth the price.
> Thx,
> Eric

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Several Unit-test failures in Hadoop 1.0.0 release version

2012-02-11 Thread Grace
Hi all,

 I met several failures when running 'ant test' on the hadoop-1.0.0 release
downloaded from the Apache web site. Does anyone have similar problems? I did
not find any related information on Google. Is there any configuration I am
missing?

Here is the failure list:

[junit] Running org.apache.hadoop.mapred.TestTTResourceReporting
[junit] Tests run: 3, Failures: 0, Errors: 1, Time elapsed: 179.124 sec


[junit] Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0.14 sec


[junit] Running org.apache.hadoop.metrics2.impl.TestSinkQueue
[junit] Tests run: 8, Failures: 1, Errors: 0, Time elapsed: 0.291 sec


[junit] Running org.apache.hadoop.tools.rumen.TestRumenJobTraces
[junit] Tests run: 9, Failures: 0, Errors: 1, Time elapsed: 41.904 sec


[junit] Running org.apache.hadoop.streaming.TestUlimit
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 84.974 sec

---------------------------------------------------------
Detailed exceptions for each of them, for your reference
---------------------------------------------------------
TEST-org.apache.hadoop.mapred.TestTTResourceReporting.txt
{{{
Testcase: testConfiguredResourceValues took 103.764 sec
Caused an ERROR
Job failed!
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:174)
at org.apache.hadoop.examples.SleepJob.run(SleepJob.java:237)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at
org.apache.hadoop.mapred.TestTTResourceReporting.runSleepJob(TestTTResourceReporting.java:342)
at
org.apache.hadoop.mapred.TestTTResourceReporting.testConfiguredResourceValues(TestTTResourceReporting.java:296)

Testcase: testResourceValuesOnLinux took 37.521 sec

}}}

TEST-org.apache.hadoop.metrics2.impl.TestGangliaMetrics
{{{
Testcase: testGangliaMetrics2 took 0.128 sec
FAILED
Missing metrics: test.s1rec.c1
junit.framework.AssertionFailedError: Missing metrics: test.s1rec.c1
at
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:98)
at
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:77)

}}}

TEST-org.apache.hadoop.metrics2.impl.TestSinkQueue.txt:
{{{
Testcase: testConcurrentConsumers took 0.003 sec
FAILED
should've thrown
junit.framework.AssertionFailedError: should've thrown
at
org.apache.hadoop.metrics2.impl.TestSinkQueue.shouldThrowCME(TestSinkQueue.java:229)
at
org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers(TestSinkQueue.java:195)
}}}

TEST-org.apache.hadoop.tools.rumen.TestRumenJobTraces.txt
{{{
Can not create a Path from a null string
java.lang.IllegalArgumentException: Can not create a Path from a null string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:78)
at org.apache.hadoop.fs.Path.<init>(Path.java:90)
at
org.apache.hadoop.tools.rumen.TestRumenJobTraces.testCurrentJHParser(TestRumenJobTraces.java:405)

}}}

Could anyone help with this problem? Thank you for your time and help.

-Grace


[jira] [Created] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability

2012-02-11 Thread Suresh Srinivas (Created) (JIRA)
Add javadoc to InterfaceAudience and InterfaceStability
--------------------------------------------------------

 Key: HADOOP-8059
 URL: https://issues.apache.org/jira/browse/HADOOP-8059
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


The InterfaceAudience and InterfaceStability javadoc is incomplete. The details 
are in HADOOP-5073.
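
For context, here is a minimal sketch of how these two annotations are applied 
in practice (the class and method names below are made up for illustration; 
only the annotations themselves come from org.apache.hadoop.classification):

{{{
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * Hypothetical public API class. The javadoc of the annotations themselves
 * should explain what "Public" and "Stable" promise to downstream users.
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public class ExampleRecordFormat {

  /**
   * Hypothetical helper meant only for other Hadoop components; it may
   * change incompatibly between any two releases.
   */
  @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
  @InterfaceStability.Unstable
  public void resetInternalState() {
    // implementation omitted
  }
}
}}}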

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hadoop Pipes WordCount Compilation Issues...

2012-02-11 Thread jem85

Hi, I am trying to execute Hadoop's WordCount using Pipes but have an issue
compiling it.

I get the following error:
Undefined symbols:
  "HadoopUtils::toString(int)", referenced from:
  WordCountReducer::reduce(HadoopPipes::ReduceContext&) in
ccYJiEZw.o
  WordCountMapper::map(HadoopPipes::MapContext&) in ccYJiEZw.o
  "HadoopUtils::splitString(std::basic_string,
std::allocator > const&, char const*)", referenced from:
  WordCountMapper::map(HadoopPipes::MapContext&) in ccYJiEZw.o
  "HadoopUtils::toInt(std::basic_string,
std::allocator > const&)", referenced from:
  WordCountReducer::reduce(HadoopPipes::ReduceContext&) in
ccYJiEZw.o
  "HadoopPipes::runTask(HadoopPipes::Factory const&)", referenced from:
  _main in ccYJiEZw.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [WordCountC] Error 1

Any help would be greatly appreciated!
-- 
View this message in context: 
http://old.nabble.com/Hadoop-Pipes-WordCount-Compilation-Issues...-tp33307961p33307961.html
Sent from the Hadoop core-dev mailing list archive at Nabble.com.



[jira] [Created] (HADOOP-8060) Add a capability to use consistent checksums for append and copy

2012-02-11 Thread Kihwal Lee (Created) (JIRA)
Add a capability to use consistent checksums for append and copy
-----------------------------------------------------------------

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.24.0, 0.23.1
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.24.0, 0.23.2


After the improved CRC32C checksum became the default, some use cases involving 
data movement are no longer supported.  For example, when running DistCp to 
copy a file stored with the CRC32 checksum to a new cluster where CRC32C is the 
default checksum, the final data integrity check fails because of the mismatch 
in checksum types.
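
To make the failure mode concrete, here is a rough sketch of the kind of 
post-copy integrity check DistCp performs, comparing the FileChecksum values of 
source and target (the paths below are made up; only the FileSystem API calls 
are real):

{{{
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumCompareSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical source (written with CRC32) and target (CRC32C cluster).
    Path src = new Path("hdfs://oldcluster/user/test/part-00000");
    Path dst = new Path("hdfs://newcluster/user/test/part-00000");

    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);

    FileChecksum srcSum = srcFs.getFileChecksum(src);
    FileChecksum dstSum = dstFs.getFileChecksum(dst);

    // Even for byte-identical files, the two checksums differ when the
    // underlying algorithms (CRC32 vs. CRC32C) differ, so the check fails.
    if (srcSum != null && dstSum != null && !srcSum.equals(dstSum)) {
      System.err.println("Checksum mismatch: " + srcSum + " vs " + dstSum);
    }
  }
}
}}}

The capability requested here would let the copy side stay consistent with the 
source's checksum type so that this comparison remains meaningful.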

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8061) Back port trunk metrics2 changes to 1.x branch

2012-02-11 Thread Luke Lu (Created) (JIRA)
Back port trunk metrics2 changes to 1.x branch
----------------------------------------------

 Key: HADOOP-8061
 URL: https://issues.apache.org/jira/browse/HADOOP-8061
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 1.0.0
Reporter: Luke Lu
Assignee: Luke Lu


For hysterical raisins, the metrics2 code in the Hadoop 1.x branch and trunk 
has diverged. It looks like HBase needs to support both 1.x and 0.23+ for a 
while, hence the more urgent need to clean up the situation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira