[jira] [Created] (HADOOP-8009) Create hadoop-client and hadoop-test artifacts for downstream projects
Create hadoop-client and hadoop-test artifacts for downstream projects
-----------------------------------------------------------------------

Key: HADOOP-8009
URL: https://issues.apache.org/jira/browse/HADOOP-8009
Project: Hadoop Common
Issue Type: Improvement
Components: build
Affects Versions: 1.0.0, 0.23.0, 0.22.0, 0.24.0, 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical

Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie, or from any in-house system that interacts with Hadoop, is quite challenging for the following reasons:

* *Different versions of Hadoop produce different artifacts:* Before Hadoop 0.23 there was a single artifact, hadoop-core; starting with Hadoop 0.23 there are several (common, hdfs, mapred*, yarn*).

* *There are no 'client' artifacts:* Current artifacts include all JARs needed to run the services, thus bringing into clients several JARs that are not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.).

* *Testing on the client side is also quite challenging, as more artifacts have to be included than the dependencies define:* For example, the history-server artifact has to be explicitly included. If using Hadoop 1 artifacts, jersey-server has to be explicitly included.

* *3rd party dependencies change in Hadoop from version to version:* This complicates things for projects that have to deal with multiple versions of Hadoop, as their exclusion lists become a huge mix & match of artifacts from different Hadoop versions, and it may break things when a particular version of Hadoop requires a dependency that another version of Hadoop does not.

Because of this it would be quite convenient to have the following 'aggregator' artifacts:

* *org.apache.hadoop:hadoop-client*: includes all JARs required to use the Hadoop client APIs (excluding all JARs that are not needed for them)

* *org.apache.hadoop:hadoop-test*: includes all JARs required to run the Hadoop Mini Clusters

These aggregator artifacts would be created for the current branches under development (trunk, 0.22, 0.23, 1.0) and for released versions that are still in use. For branches under development, these artifacts would be generated as part of the build. For released versions we would have a special branch used only as a vehicle for publishing the corresponding 'aggregator' artifacts.
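The exclusion-list and client-JAR-bloat pain described above can be seen from any downstream project with the Maven dependency plugin. A minimal diagnostic sketch (the exact transitive JARs listed will vary by Hadoop version and by which Hadoop artifacts the project depends on):

  # From a downstream project that depends on hadoop-core (pre-0.23)
  # or on hadoop-common/hadoop-hdfs/hadoop-mapreduce (0.23+):
  mvn dependency:tree -Dverbose | grep -i -e jersey -e jsp -e tomcat -e servlet

  # Lists the server-side JARs that get dragged onto the client classpath today,
  # i.e. the ones a hadoop-client aggregator artifact would exclude.
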
Re: MPI: Java/JNI help
Managed to get this to work - had to adjust our configure a little (one setting artificially introduced a different path). Thanks!

On Jan 30, 2012, at 5:13 PM, Kihwal Lee wrote:

> It doesn't have to be static.
> Do architectures match between the node manager jvm and the library?
> If one is 32 bit and the other is 64, it won't work.
>
> Kihwal
>
> On 1/30/12 5:58 PM, "Ralph Castain" wrote:
>
> Hi folks
>
> As per earlier emails, I'm just about ready to release the Java MPI bindings.
> I have one remaining issue and would appreciate some help.
>
> We typically build OpenMPI dynamically. For the Java bindings, this means
> that the JNI code underlying the Java binding must dynamically load OMPI
> plug-ins. Everything works fine on Mac. However, on Linux, I am getting
> dynamic library load errors.
>
> I have tried setting -Djava.library.path and LD_LIBRARY_PATH to the correct
> locations. In both cases, I get errors from the JNI code indicating that it
> was unable to open the specified dynamic library.
>
> I have heard from one person that JNI may need to be built statically, and I
> suppose it is possible that Apple's customized Java implementation
> specifically resolved that problem. However, all the online documentation I
> can find indicates that Java on Linux should also be able to load dynamic
> libraries - but JNI is not specifically addressed.
>
> Can any of you Java experts provide advice on this behavior? I'd like to get
> these bindings released!
>
> Thanks
> Ralph
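On Kihwal's 32-bit/64-bit question in the quoted thread: a quick way to check whether the JVM and the native library agree on word size (the library path below is only an example):

  # Report the ELF class of the native library (32-bit vs 64-bit)
  file /usr/local/lib/libmpi.so

  # A 64-bit JVM typically identifies itself in the version banner,
  # e.g. "64-Bit Server VM"
  java -version
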
Re: MPI: Java/JNI help
I was able to dig further into this, and we believe we finally tracked this down to the root cause. It appears that Java loads things into a private, as opposed to global, namespace. Thus, the Java MPI bindings load the initial libmpi just fine. However, when libmpi then attempts to load the individual plug-ins beneath it, those loads fail due to "unfound" symbols. Our plug-ins are implemented as individual shared libraries and reference symbols from within the larger libmpi above them. In order to find those symbols, the libraries must be in the global namespace.

We have a workaround - namely, to disable dlopen so all the plug-ins get pulled up into libmpi. However, this eliminates the ability for a vendor to distribute a binary, proprietary plug-in that we "absorb" during dlopen. For the moment, this isn't a big deal, but it could be an issue down the line. We have some ideas on how to resolve it internally, but it would take a fair amount of work and have some side effects.

Does anyone know if it is possible to convince Java to use the global namespace? Or can you point me to someone/someplace where I should explore the question?

Thanks
Ralph
Re: MPI: Java/JNI help
There might be other tricks you can play with the class loader, but here is my idea: you could have the initial JNI native lib become a sort of wrapper that dlopen()s the real thing (the one the plug-ins depend on) with RTLD_GLOBAL, so that the fact that the JNI library itself is loaded in a specific namespace does not matter.

Kihwal
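A different technique from the wrapper idea above, working at the process level rather than in the JNI code, is to preload libmpi before the JVM starts: preloaded objects are placed in the process's global symbol scope, so plug-ins dlopen()ed later can resolve symbols against them. A rough sketch only - the library path and the main class name are placeholders, not anything defined in this thread:

  # Put libmpi's symbols in the global scope before the JVM and JNI wrapper load
  LD_PRELOAD=/opt/openmpi/lib/libmpi.so \
    java -Djava.library.path=/opt/openmpi/lib MyMpiApp
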
[jira] [Created] (HADOOP-8010) hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present
hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present
--------------------------------------------------------------------------------------------------------------

Key: HADOOP-8010
URL: https://issues.apache.org/jira/browse/HADOOP-8010
Project: Hadoop Common
Issue Type: Bug
Components: scripts
Affects Versions: 1.0.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor
Fix For: 1.0.1

Running hadoop daemon commands when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present produces:

{noformat}
[: 76: true: unexpected operator
{noformat}
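An error of this shape is what a POSIX shell such as dash prints when a [ ] test uses the bash-only == operator. The sketch below only illustrates the kind of construct that can trigger it; it does not reproduce the actual line 76 of hadoop-config.sh:

  HADOOP_HOME_WARN_SUPPRESS=true

  # Under dash (/bin/sh on e.g. Debian/Ubuntu), '==' is not a valid test operator
  # and fails with an error like "[: true: unexpected operator":
  if [ "$HADOOP_HOME_WARN_SUPPRESS" == "true" ]; then :; fi

  # The portable spelling uses a single '=':
  if [ "$HADOOP_HOME_WARN_SUPPRESS" = "true" ]; then
    echo "home warning suppressed"
  fi
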
[jira] [Created] (HADOOP-8011) How to use the distcp command between 2 clusters of different versions
How to use the distcp command between 2 clusters of different versions
------------------------------------------------------------------------

Key: HADOOP-8011
URL: https://issues.apache.org/jira/browse/HADOOP-8011
Project: Hadoop Common
Issue Type: New Feature
Reporter: cldoltd

I have two clusters, 1.0 and 0.2. How do I use distcp to copy between the 2 clusters? This is the error:

Copy failed: java.io.IOException: Call to cluster1 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
        at org.apache.hadoop.ipc.Client.call(Client.java:1071)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy1.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
        at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:635)
        at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:800)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
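Not a fix for the RPC version mismatch itself, but the usual way to distcp between clusters whose RPC protocols differ is to run distcp on the destination cluster and read the source over the read-only, version-independent HFTP interface instead of HDFS. A sketch only - hostnames, paths, and ports are placeholders (50070 is merely the default NameNode HTTP port):

  # Run on the destination (newer) cluster; the source is read via hftp
  hadoop distcp hftp://source-namenode:50070/user/data hdfs://dest-namenode:8020/user/data
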
Re: Issue on building hadoop native
Thanks, Ronald. After trying a couple of times, the build is working now.

Hai

From: Ronald Petty
To: Hai Huang
Cc: "common-dev@hadoop.apache.org"
Sent: Monday, January 30, 2012 12:59:57 AM
Subject: Re: Issue on building hadoop native

Hai,

I don't know the 'official' reason, but it is bad practice to use user ids under 1000. I presume you are using root or some typical user account to build this. The fix for me was to create a new user with an id that is greater than 1000. Here is an example:

useradd --uid 1001 hadoopuser

I hope that works!

Kindest regards.

Ron

On Sun, Jan 29, 2012 at 9:16 PM, Hai Huang wrote:

> Hi Ronald,
>
> I just tried to use
>
> mvn -Pnative compile
>
> and it passed.
>
> So I used the command "mvn -e package -Pdist,native,docs -DskipTests -Dtar"
> again, checked the log, and found the issue message below:
>
> [INFO] --- make-maven-plugin:1.0-beta-1:test (test) @ hadoop-yarn-server-nodemanager ---
> [INFO] make test-container-executor
> [INFO] make[1]: Entering directory
> `/home/hhf/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/container-executor'
> [INFO] make[1]: `test-container-executor' is up to date.
> [INFO] make[1]: Leaving directory
> `/home/hhf/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/container-executor'
> [INFO] make check-TESTS
> [INFO] make[1]: Entering directory
> `/home/hhf/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/container-executor'
> [INFO] Requested user hhuang has id 500, which is below the minimum allowed 1000
> [INFO] FAIL: test-container-executor
> [INFO]
> [INFO] 1 of 1 test failed
> [INFO] Please report to mapreduce-...@hadoop.apache.org
> [INFO]
>
> It seems the issue is from "test-container-executor", and the message is
> "... has id 500, which is below the minimum allowed 1000 ..." ?
>
> Best,
>
> Hai
>
> ----- Original Message -----
>
> From: Ronald Petty
> To: common-dev@hadoop.apache.org; Hai Huang
> Cc:
> Sent: Sunday, January 29, 2012 11:42:30 PM
> Subject: Re: Issue on building hadoop native
>
> Hai,
>
> Can you rerun with "-e -X" and the native setting as well? Also, can you
> pastebin the entire build output and send the link to it?
>
> Kindest regards.
>
> Ron
>
> On Sun, Jan 29, 2012 at 3:44 PM, Hai Huang wrote:
>
>> Hi,
>>
>> I got the Hadoop source via Git and built it using the command
>>
>> mvn package -Pdist,native,docs -DskipTests -Dtar
>>
>> Does anyone know about the compilation issue below?
>>
>> [ERROR] Failed to execute goal
>> org.codehaus.mojo:make-maven-plugin:1.0-beta-1:test (test) on project
>> hadoop-yarn-server-nodemanager: make returned an exit value != 0. Aborting
>> build; see command output above for more information. -> [Help 1]
>> [ERROR]
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the
>> -e switch.
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> [ERROR]
>> [ERROR] For more information about the errors and possible solutions,
>> please read the following articles:
>> [ERROR] [Help 1]
>> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
>> [ERROR]
>> [ERROR] After correcting the problems, you can resume the build with the
>> command
>> [ERROR] mvn -rf :hadoop-yarn-server-nodemanager
>>
>> I am using CentOS 6.2 and gcc 4.4.6. It looks like the issue is due to
>> building the native code. If I just used the command
>>
>> mvn package -Pdist -DskipTests -Dtar
>>
>> the build was OK.
>>
>> Best,
>>
>> Hai
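For anyone hitting the same failure: the test-container-executor check in this thread rejects any build user whose uid is below 1000, so the native-profile build needs to run as a user at or above that threshold. A small sketch (the user name here is only an example):

  # Check the uid of the current build user; anything below 1000 trips the check
  id -u

  # Create a dedicated build user with a uid >= 1000 and rerun the native build as it
  sudo useradd --uid 1001 --create-home hadoopbuild
  sudo su - hadoopbuild
  mvn package -Pdist,native,docs -DskipTests -Dtar
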