Thanks, Ronald.
After a couple of tries, the build is working now.
Hai
From: Ronald Petty
To: Hai Huang
Cc: "common-dev@hadoop.apache.org"
Sent: Monday, January 30, 2012 12:59:57 AM
Subject: Re: Issue on building hadoop native
Hai,
I don't know
How to use the distcp command between 2 clusters that have different versions
-
Key: HADOOP-8011
URL: https://issues.apache.org/jira/browse/HADOOP-8011
Project: Hadoop Common
Issue Type: Ne
hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to
true and HADOOP_HOME is present
-
Key: HADOOP-8010
URL: https://issues.apache
There might be other tricks you can play with CL, but here is my idea: you
could have the initial JNI native lib become a sort of wrapper that dlopen()s
the real thing (the one the plug-ins depend on) with RTLD_GLOBAL, so that the
fact that the JNI library is loaded in a specific namespace does not
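
(For illustration, a minimal C sketch of the wrapper idea above, not code from the thread: the library names "wrapper" and "libmpi.so" and the use of JNI_OnLoad as the hook are assumptions. The point is that the thin JNI library re-opens the real library with RTLD_GLOBAL so its symbols are visible to plug-ins loaded later.)

    /* libwrapper.c -- hypothetical thin JNI wrapper.
     * The JVM loads this small library via System.loadLibrary("wrapper");
     * JNI_OnLoad then dlopen()s the real library with RTLD_GLOBAL so its
     * symbols land in the global namespace and stay visible to any
     * plug-ins that library loads afterwards. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <jni.h>

    static void *real_handle;   /* handle to the real library, e.g. libmpi.so */

    JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved)
    {
        (void)vm; (void)reserved;

        /* "libmpi.so" is a placeholder for whatever the plug-ins depend on. */
        real_handle = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
        if (real_handle == NULL) {
            fprintf(stderr, "wrapper: dlopen failed: %s\n", dlerror());
            return JNI_ERR;   /* make the failure visible on the Java side */
        }
        return JNI_VERSION_1_6;
    }

The Java side would then load the wrapper (e.g. System.loadLibrary("wrapper")) instead of loading the real library directly.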
I was able to dig further into this, and we believe we finally tracked this
down to the root cause. It appears that Java loads things into a private, as
opposed to global, namespace. Thus, the Java MPI bindings load the initial
libmpi just fine.
However, when libmpi then attempts to load the indivi
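
(A standalone C sketch of the failure mode described above, not Hadoop or Open MPI code: the file names libparent.so/libplugin.so and the symbol parent_api() are made up, and the promotion-on-reopen behaviour is glibc/Linux dlopen semantics. A library opened with RTLD_LOCAL keeps its symbols out of the global namespace, so a plug-in opened afterwards cannot resolve them; re-opening the parent with RTLD_GLOBAL lets the same plug-in load.)

    /* namespace_demo.c -- assumes libparent.so defines parent_api() and
     * libplugin.so references parent_api() without linking against
     * libparent directly.  Build with: cc namespace_demo.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Mimic the JVM: load the parent library into a private namespace. */
        void *parent = dlopen("./libparent.so", RTLD_NOW | RTLD_LOCAL);
        if (!parent) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* The plug-in needs parent_api(), but that symbol is not global,
         * so this dlopen() fails with an "undefined symbol" error. */
        void *plugin = dlopen("./libplugin.so", RTLD_NOW);
        if (!plugin)
            fprintf(stderr, "plugin load failed (expected): %s\n", dlerror());

        /* Promote the parent's symbols to the global namespace and retry. */
        dlopen("./libparent.so", RTLD_NOW | RTLD_GLOBAL);
        plugin = dlopen("./libplugin.so", RTLD_NOW);
        printf("plugin after RTLD_GLOBAL: %s\n", plugin ? "loaded" : dlerror());
        return 0;
    }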
Managed to get this to work - had to adjust our configure a little (one setting
artificially introduced a different path). Thanks!
On Jan 30, 2012, at 5:13 PM, Kihwal Lee wrote:
> It doesn't have to be static.
> Do the architectures match between the node manager JVM and the library?
> If one is 32
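
(One quick way to answer that question, as a hedged sketch rather than anything from the thread: compile the tiny program below with the same toolchain and flags used for the native library, and compare its output against the node manager JVM's bitness, e.g. from `java -version` or the sun.arch.data.model property. A 64-bit JVM cannot load a 32-bit JNI library, and vice versa.)

    /* bitness.c -- prints whether this build is 32- or 64-bit. */
    #include <stdio.h>

    int main(void)
    {
        printf("%zu-bit build\n", 8 * sizeof(void *));
        return 0;
    }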
Create hadoop-client and hadoop-test artifacts for downstream projects
---
Key: HADOOP-8009
URL: https://issues.apache.org/jira/browse/HADOOP-8009
Project: Hadoop Common
Is