ow up on any blocker
> list.
>
> http://s.apache.org/hadoop-blocker-bugs
>
> Arun
>
>
> On Jun 15, 2013, at 4:44 PM, Ralph Castain wrote:
>
>> Not trying to be a pain, but I am trying to get clarification. The protocol
>> buffer support is still
Just curious about your procedures. Given that there is at least one blocker JIRA
out there that has yet to be fully resolved, do you intend to release anyway?
On Jun 15, 2013, at 8:19 AM, Alejandro Abdelnur wrote:
> If the intention is to get the release out in time for the Hadoop Summit we
> ha
Ralph Castain created HADOOP-9606:
-
Summary: Protocol buffer support cannot compile under C
Key: HADOOP-9606
URL: https://issues.apache.org/jira/browse/HADOOP-9606
Project: Hadoop Common
Hi folks
On line 228 of
hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto, someone
named a function using a reserved name:
/**
* Request to register a sub-ordinate namenode
*/
rpc register(RegisterRequestProto) returns(RegisterResponseProto);
You cannot name a function "register": it is a reserved keyword in C, so the generated code cannot compile under C.
Hi folks
I'm trying to build the head of branch-2 on a CentOS box and hitting a rash of
errors like the following (all from the protobuf support area):
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile)
on project hadoop-common: Compil
On Apr 9, 2012, at 3:50 PM, Ralph Castain wrote:
>
> On Apr 9, 2012, at 2:45 PM, Kihwal Lee wrote:
>
>> The path, "file:/Users/rhc/yarnrun/13", indicates that your copy operation's
>> destination was the local file system, instead of hdfs.
>
> Yeah,
> What is the value of "fs.default.name" set to in core-site.xml?
fs.default.name
hdfs://localhost:9000
>
> Kihwal
>
>
> On 4/9/12 3:26 PM, "Ralph Castain" wrote:
>
> Finally managed to chase down the 0.23 API docs and get the FileStatus
>
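A minimal sketch of the kind of check being discussed in this exchange, assuming the 0.23-era FileSystem API: load the configuration, see which filesystem fs.default.name actually resolves to, and log the key fields of the destination FileStatus. The class name and destination path are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckDefaultFs {
  public static void main(String[] args) throws Exception {
    // Loads core-default.xml and core-site.xml from the classpath. If
    // core-site.xml is not found, fs.default.name silently falls back to
    // file:///, which would explain a destination like file:/Users/rhc/yarnrun/13.
    Configuration conf = new Configuration();
    System.out.println("fs.default.name    = " + conf.get("fs.default.name"));

    FileSystem fs = FileSystem.get(conf);
    System.out.println("default filesystem = " + fs.getUri());

    // Log the key fields of the destination status (hypothetical path);
    // getFileStatus throws FileNotFoundException if the path is missing.
    Path dest = new Path("/user/rhc/yarnrun/13");
    FileStatus st = fs.getFileStatus(dest);
    System.out.println("path     = " + st.getPath());
    System.out.println("length   = " + st.getLen());
    System.out.println("modified = " + st.getModificationTime());
  }
}

If the reported default filesystem is file:/// rather than hdfs://localhost:9000, a scheme-less destination path will land on local disk.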
Solved the "realm" warning courtesy of stackoverflow:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK
-Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
solves it. Didn't help resolve the problem, as expected.
On Apr 9, 2012, at 2:26 PM, Ralph Castain wro
at 1:27 PM, Kihwal Lee wrote:
> It looks like the home directory does not exist but the copy went through.
> Can you try to LOG the key fields in destStatus including path? It might be
> ending up in an unexpected place.
>
> Kihwal
>
>
>
> On 4/9/12 12:45 PM, "Ra
s including path? It might be
> ending up in an unexpected place.
>
> Kihwal
>
>
>
> On 4/9/12 12:45 PM, "Ralph Castain" wrote:
>
> Hi Bobby
>
> On Apr 9, 2012, at 11:40 AM, Robert Evans wrote:
>
>> What do you mean by relocated some suppor
ogram work that way. Problem I'm having
is when I move an archive, which is why I was hoping to look at the HDFS end to
see what files are present, and in what locations so I can set the paths
accordingly.
Thanks
Ralph
>
> --Bobby Evans
>
>
> On 4/9/12 11:10 AM, "
Hi folks
I'm trying to develop an AM for the 0.23 branch and running into a problem that
I'm having difficulty debugging. My client relocates some supporting files to
HDFS, creates the application object for the AM, and submits it to the RM.
The file relocation request doesn't generate an error
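For the file relocation step described above, here is a rough sketch of one way to push a supporting file into the default filesystem and describe it to YARN as a LocalResource for the AM container. This assumes the 0.23/branch-2 records API; the class name and both paths are made up.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.util.ConverterUtils;
import org.apache.hadoop.yarn.util.Records;

public class RelocateToHdfs {

  // Copy a local file into the default filesystem and describe it as a
  // LocalResource for the AM container.
  static LocalResource relocate(FileSystem fs, Path localSrc, Path hdfsDst)
      throws IOException {
    fs.copyFromLocalFile(false /* keep source */, true /* overwrite */,
        localSrc, hdfsDst);

    // The size and timestamp recorded here must match what actually landed
    // in HDFS, or localization on the NodeManager will fail later.
    FileStatus st = fs.getFileStatus(hdfsDst);
    LocalResource rsrc = Records.newRecord(LocalResource.class);
    rsrc.setResource(ConverterUtils.getYarnUrlFromPath(st.getPath()));
    rsrc.setSize(st.getLen());
    rsrc.setTimestamp(st.getModificationTime());
    rsrc.setType(LocalResourceType.ARCHIVE);
    rsrc.setVisibility(LocalResourceVisibility.APPLICATION);
    return rsrc;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    relocate(fs,
        new Path("file:///Users/rhc/support/launcher.tar.gz"),
        new Path("/user/rhc/yarnrun/launcher.tar.gz"));
  }
}

The resulting LocalResource would normally go into the localResources map of the AM's ContainerLaunchContext.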
> Is your AM written in Java or C?
>
> On Mar 26, 2012, at 3:55 PM, Ralph Castain wrote:
>
>> Perhaps it would help if I outline the use case. I have a Java client that
>> needs to launch a non-Java application manager. Obviously, the client talks
>> to the RM using t
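On the question of whether the AM has to be Java: from the client's side the AM container spec is just a command line plus local resources, so it can point at any executable. A rough sketch against the 0.23-era records API; the binary name is hypothetical, and the application id, resource request, and actual submission call are omitted since they differ across branches.

import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class NonJavaAmSpec {

  static ApplicationSubmissionContext buildContext() {
    // The command is launched by the NodeManager; "./my_c_am" would be
    // shipped alongside as a LocalResource. <LOG_DIR> is expanded by the NM.
    ContainerLaunchContext amContainer =
        Records.newRecord(ContainerLaunchContext.class);
    amContainer.setCommands(Collections.singletonList(
        "./my_c_am 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"));

    ApplicationSubmissionContext appContext =
        Records.newRecord(ApplicationSubmissionContext.class);
    appContext.setApplicationName("non-java-am");
    appContext.setAMContainerSpec(amContainer);
    return appContext;
  }
}

The catch, as the rest of this thread shows, is that the non-Java AM then has to speak the RM's protobuf-based protocol itself.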
Evans
>
> On 3/24/12 8:38 PM, "Eli Collins" wrote:
>
> Good idea, no reason we shouldn't, the build probably wasn't updated to
> include them when we added them. File a jira?
>
> On Saturday, March 24, 2012, Ralph Castain wrote:
>> Hi folks
>
Hi folks
I notice that the .proto files are not present in the built tarball. This
presents a problem to those of us working on 3rd party tools that need to talk
to Hadoop tools such as the resource manager. It means that anyone wanting to
build our tools has to install an svn checkout of the c
> libopen-rte.so.0.0.0
> -rwxr-xr-x 1 nwatkins nwatkins 4695012 2012-02-22 09:49 libopen-rte.so.0.0.0
> -rw-r--r-- 1 nwatkins nwatkins 27260 2012-02-22 09:49 mpi.jar
> drwxrwxr-x 2 nwatkins nwatkins 12288 2012-02-22 09:49 openmpi
> drwxrwxr-x 2 nwatkins nwatkins 4096 2012-02-22
> LD_LIBRARY_PATH set to /mpi/java/install/lib
>
> Looks like Java is loading libmpi_java successfully, and something is going
> awry with the library magic in mpi_MPI.c:loadGlobalLibraries()
>
> Thanks,
> Noah
>
> On Feb 21, 2012, at 3:05 PM, Ralph Castain wrote:
>
>> Hi folk
Hi folks
With support from EMC, several of us in the Open MPI community (including LANL,
Cisco, HLRS, Oracle, and IBM) have integrated Java bindings into the Open MPI
code. The new bindings are not part of a formal release at this time (will be
in the upcoming 1.7 series), but can be obtained f
Hi folks
I'm a newbie to the Hadoop code and am trying to build the svn trunk per
instructions on the wiki and the mailing list. I'm hitting a failure and would
appreciate any suggestions:
main:
[exec] tar: Failed to open 'hadoop-project-dist-0.24.0-SNAPSHOT.tar.gz'
It looks like this f
ort of wrapper to dlopen()
> the real thing (the one plug-ins depend on) with RTLD_GLOBAL, so that the
> fact that the jni library is loaded in a specific name space does not matter.
>
> Kihwal
>
> On 1/31/12 4:34 PM, "Ralph Castain" wrote:
>
> I was able to
On Jan 30, 2012, at 5:13 PM, Kihwal Lee wrote:
> It doesn't have to be static.
> Do architectures match between the node manager jvm and the library?
> If one is 32 bit and the other is 64, it won't work.
>
> Kihwal
>
> On 1/30/12 5:58 PM, "Ralph Castain" wrote:
>
> Hi folks
>
> As per earlier emails, I'm just about ready to release the Java MPI bindings.
> I have one remaining issue an
On Jan 30, 2012, at 5:13 PM, Kihwal Lee wrote:
> It doesn't have to be static.
> Do architectures match between the node manager jvm and the library?
> If one is 32 bit and the other is 64, it won't work.
That's a good question - I'll check...
>
> Kihwal
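One quick way to answer the 32-vs-64-bit question from the JVM side is to log what the node manager's JVM reports about itself; note that sun.arch.data.model is HotSpot-specific, so treat it as a hint rather than a guarantee.

public class JvmArch {
  public static void main(String[] args) {
    // "32" or "64" on HotSpot JVMs; may be null on other JVMs.
    System.out.println("sun.arch.data.model = "
        + System.getProperty("sun.arch.data.model"));
    System.out.println("os.arch      = " + System.getProperty("os.arch"));
    System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
  }
}

Comparing that output with what the file(1) command reports for the libmpi_java shared object on the same node would confirm whether the architectures match.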
Hi folks
As per earlier emails, I'm just about ready to release the Java MPI bindings. I
have one remaining issue and would appreciate some help.
We typically build OpenMPI dynamically. For the Java bindings, this means that
the JNI code underlying the Java binding must dynamically load OMPI p
Hi folks
I have been familiarizing myself with the Hadoop 0.23 code tree, and found
myself wondering if people were aware of the tools commonly used by the HPC
community as I worked my way thru the code. Just in case the community isn't, I
thought it might be worth a very brief summary of the f
of the author, and
> do not necessarily represent the views of any organization, past or
> present, the author might be affiliated with.)
>
>
>
> On 11/21/11 3:54 PM, "Ralph Castain" wrote:
>
>> Hi Milind
>>
>> Glad to hear of the progress - I recal
eup, I
might look at this next.
>
> - milind
>
> ---
> Milind Bhandarkar
> Greenplum Labs, EMC
> (Disclaimer: Opinions expressed in this email are those of the author, and
> do not necessarily represent the views of any organization, past or
> present, the author might b
each socket on every node".
>>
>> I have written the code to implement the above support on a number of
>> systems, and don't foresee major problems doing it for Hadoop (though I
>> would welcome a chance to get a brief walk-thru the code from someone).
>> Please let me know if this would be of interest to the Hadoop community.
>>
>> Thanks
>> Ralph Castain
>>
>>
>>
>
ove to collaborate, should we discuss on that jira?
Sure! I'll poke my nose over there...thanks!
>
> thanks,
> Arun
>
> On Nov 21, 2011, at 3:35 PM, Ralph Castain wrote:
>
>> Hi folks
>>
>> I am a lead developer in the Open MPI community, mostly focu
have written the code to implement the above support on a number of systems,
and don't foresee major problems doing it for Hadoop (though I would welcome a
chance to get a brief walk-thru the code from someone). Please let me know if
this would be of interest to the Hadoop community.
Thanks
Ralph Castain