[jira] [Created] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename

2014-10-29 Thread Noah Watkins (JIRA)
Noah Watkins created HADOOP-11244: - Summary: The HCFS contract test testRenameFileBeingAppended doesn't do a rename Key: HADOOP-11244 URL: https://issues.apache.org/jira/browse/HADOOP-11244 Pr

Re: MPI Java bindings now available

2012-02-22 Thread Noah Watkins
watkins nwatkins 4096 2012-02-22 09:49 pkgconfig On Feb 22, 2012, at 10:25 AM, Ralph Castain wrote: > Hi Noah > > Your LD_LIBRARY_PATH has to include the path to the OMPI libraries so we can > find libmpi. Did you include your $prefix/lib[64] in it? > > On Feb 22, 2012

Re: MPI Java bindings now available

2012-02-22 Thread Noah Watkins
Just gave the nightly release a try and I'm getting an error: nwatkins@kyoto:~/projects/openmpi_java/openmpi-1.7a1r25994/examples$ mpirun -np 1 java Hello JAVA BINDINGS FAILED TO LOAD REQUIRED LIBRARIES My setup is on the latest Ubuntu with: 1. Built with contrib/platform/hadoop/linux 2. Compil

Re: Trunk build failure

2012-02-16 Thread Noah Watkins
Could you try mvn -X compile? That should show the exact commands that are being executed, which seem to be coming from hadoop-project-dist/pom.xml. On Lion the following worked for me: git clone git://git.apache.org/hadoop-common.git cd hadoop-common mvn compile I had a very similar error: [ERR

Re: MPI: Java/JNI help

2012-01-30 Thread Noah Watkins
On Jan 30, 2012, at 3:58 PM, Ralph Castain wrote: > I have tried setting -Djava.library.path and LD_LIBRARY_PATH to the correct > locations. In both cases, I get errors from the JNI code indicating that it > was unable to open the specified dynamic library. Do the paths to OpenMPI libraries, a

[jira] [Resolved] (HADOOP-6779) Support for Ceph kernel client

2011-11-30 Thread Noah Watkins (Resolved) (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noah Watkins resolved HADOOP-6779. -- Resolution: Won't Fix Resolved as "Won't Fix". If there are any users

Reporting BlockLocation hosts using IPs

2011-11-08 Thread Noah Watkins
Hi, Is it possible to use IP addresses for the host list of BlockLocation? The Ceph file system uses IPs to describe block locations, and Hadoop is not achieving any data locality. I used a quick reverse DNS hack to verify that the IPs were causing the problem, but this isn't robust, and the Ce
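The reverse-DNS workaround described above can be sketched with the JDK alone: map each IP from the file system's block map to a hostname before handing it to Hadoop's host list (the BlockLocation API itself is not shown here). This is a minimal, illustrative sketch; the method name `hostFor` is hypothetical.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseDnsDemo {
    // Map an IP to a hostname for a block-location host list. Falls back
    // to the raw IP string: getCanonicalHostName never returns null, and
    // returns the textual address when no PTR record resolves.
    static String hostFor(String ip) {
        try {
            return InetAddress.getByName(ip).getCanonicalHostName();
        } catch (UnknownHostException e) {
            return ip;
        }
    }

    public static void main(String[] args) {
        // Result is environment-dependent (often "localhost").
        System.out.println(hostFor("127.0.0.1"));
    }
}
```

As the original message notes, this is not robust: it depends on the cluster's DNS being populated with PTR records for every datanode address.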

Re: FileSystem contract of listStatus

2011-11-02 Thread Noah Watkins
- Original Message - > From: "Eli Collins" > > RawLocalFileSystem uses Java's File#list which has "no guarantee that > the name strings in the resulting array will appear in any specific > order; they are not, in particular, guaranteed to appear in > alphabetical order.", however the FSCon
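The ordering caveat quoted above can be demonstrated with plain Java: File#list makes no ordering guarantee, so a contract test that compares a listing against expected names should sort first. The same fix applies to the FileStatus[] returned by FileSystem#listStatus. A minimal stdlib sketch (file names are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;

public class ListOrderDemo {
    public static void main(String[] args) throws IOException {
        // Create a scratch directory with a few files.
        File dir = Files.createTempDirectory("liststatus-demo").toFile();
        for (String name : new String[] {"c.txt", "a.txt", "b.txt"}) {
            new File(dir, name).createNewFile();
        }

        // File#list gives no ordering guarantee, so sort before comparing
        // against an expected listing.
        String[] names = dir.list();
        Arrays.sort(names);
        System.out.println(Arrays.toString(names)); // [a.txt, b.txt, c.txt]

        // Clean up.
        for (File f : dir.listFiles()) f.delete();
        dir.delete();
    }
}
```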

FileSystem contract of listStatus

2011-11-02 Thread Noah Watkins
I have a question about the FileSystem contract in 0.20. In FileSystemContractBaseTest:testFileStatus() there are several files created, and afterwards the test confirms that they are present. Here is the relevant code: FileStatus[] paths = fs.listStatus(path("/test")); paths = fs.li

Re: Trouble resolving external jar dependency

2011-08-04 Thread Noah Watkins
pattern. > > Cheers, > > Joep > ________ > From: Noah Watkins [jayh...@soe.ucsc.edu] > Sent: Sunday, July 31, 2011 5:02 PM > To: common-dev@hadoop.apache.org > Subject: Trouble resolving external jar dependency > > I'm experimen

Trouble resolving external jar dependency

2011-07-31 Thread Noah Watkins
I'm experimenting with a new file system that depends on an external jar that is not available right now via maven. I added the jar to the lib/ directory and hadoop-common builds fine. However, when running 'ant mvn-install' I get the following error. It seems as though a reference to the extern

Re: RecordReader

2011-03-23 Thread Noah Watkins
On Mar 23, 2011, at 1:51 PM, Keren Ouaknine wrote: > Hello, > > I have a specific RecordReader, named XRR. I would like to iterate on split and > create records. Thus, need to position its start attribute. I don't have > enough info from InputSplit. Interface for inputsplit had only getLength and > ge
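The positioning problem asked about above is usually solved by casting the InputSplit to FileSplit (which exposes getStart() and getPath()) and then discarding the partial record at the front of the split. The boundary logic can be sketched with the stdlib alone; the method name `firstRecordOffset` is hypothetical:

```java
public class SplitStartDemo {
    // For a line-oriented reader, a split that doesn't begin at offset 0
    // starts mid-record: skip forward to the byte after the next '\n' and
    // begin reading there (the previous split owns the partial line).
    static int firstRecordOffset(byte[] data, int splitStart) {
        if (splitStart == 0) return 0;
        for (int i = splitStart; i < data.length; i++) {
            if (data[i] == '\n') return i + 1;
        }
        return data.length; // no complete record begins in this split
    }

    public static void main(String[] args) {
        byte[] data = "alpha\nbeta\ngamma\n".getBytes();
        System.out.println(firstRecordOffset(data, 0)); // 0
        System.out.println(firstRecordOffset(data, 3)); // 6, start of "beta"
    }
}
```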

[jira] Created: (HADOOP-7099) Make RawLocalFileSystem more friendly to sub-classing

2011-01-11 Thread Noah Watkins (JIRA)
Components: fs Affects Versions: 0.23.0 Reporter: Noah Watkins Fix For: 0.23.0 This patch does 2 things that make sub-classing RawLocalFileSystem easier. First, it adds a constructor that allows a sub-class to avoid calling getInitialWorkingDirectory(). This is

[jira] Created: (HADOOP-7097) java.library.path missing basedir

2011-01-10 Thread Noah Watkins (JIRA)
: 0.22.0 Environment: Ubuntu 10.10 Reporter: Noah Watkins Priority: Blocker Fix For: 0.22.0 My Hadoop installation is having trouble loading the native code library. It appears from the log below that java.library.path is missing the basedir in its path

source versioning question

2011-01-10 Thread Noah Watkins
What is the relation between the current trunk and branch-0.22? Is trunk the current dev for 0.23 or 0.22? Thanks, Noah

native code lib: java.library.path missing basedir

2011-01-10 Thread Noah Watkins
Hi, My Hadoop installation is having trouble loading the native code library. It appears from the log below that java.library.path is missing the basedir in its path. The libraries are built, and present in the directory shown below (relative to hadoop-common directory). Instead of seeing:
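When diagnosing a missing-basedir problem like the one described above, a quick first step is to print the search path the JVM is actually using, since java.library.path is what System.loadLibrary searches. A minimal diagnostic sketch:

```java
public class LibPathDemo {
    public static void main(String[] args) {
        // java.library.path is seeded from LD_LIBRARY_PATH plus JVM
        // defaults, or overridden entirely by -Djava.library.path=...
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        System.out.println("LD_LIBRARY_PATH   = "
                + System.getenv("LD_LIBRARY_PATH"));
    }
}
```

Comparing this output against the directory that actually holds the built .so files shows immediately whether the basedir was dropped when the path was assembled.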

Re: *FileSystem and *Fs APIs

2011-01-02 Thread Noah Watkins
> Most of the Hadoop code base (MapReduce, FsShell etc) has not yet > switched to the new API. I have a FS implementation that is nearly complete, and designed against the 0.21 code base, using the old FileSystem interface. Do you have any recommendations on how to move to the new API in trunk

[jira] Created: (HADOOP-7059) Remove "unused" warning in native code

2010-12-07 Thread Noah Watkins (JIRA)
Affects Versions: 0.21.0 Reporter: Noah Watkins The file: src/native/src/org_apache_hadoop.h declares the static function "do_dlsym" in the header as non-inline. Files including the header (e.g. for the THROW macro) receive a "defined but unused" warning duri

Native code and autoconf generated files

2010-12-04 Thread Noah Watkins
Hi, This question pertains to the 0.21 release. When adding code to the Hadoop native code library what is the preferred method for regenerating Makefile.in? I've tried to use autoreconf, but it seems as though my newer version of autotools is causing compilation problems for the other native