Loading hadoop.dll in tests is supposed to work via a shared
maven-surefire-plugin configuration that sets the PATH environment variable
to include the build location of the dll:

https://github.com/apache/hadoop-common/blob/trunk/hadoop-project/pom.xml#L894

(On Windows, the shared library search path is controlled with PATH rather
than LD_LIBRARY_PATH as on Linux.)
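
If you want to double-check that a test JVM can actually find the library,
here is a quick stand-alone diagnostic (just a sketch I'm improvising for
this thread, not part of the Hadoop build) that prints the relevant search
paths and then attempts the load:

    // NativeCheck.java - hypothetical diagnostic, not part of Hadoop.
    // Prints the search paths the JVM uses for native libraries and then
    // tries to load hadoop.dll the same way Hadoop's native code would.
    public class NativeCheck {
        public static void main(String[] args) {
            System.out.println("java.library.path = " + System.getProperty("java.library.path"));
            System.out.println("PATH = " + System.getenv("PATH"));
            // On Windows the JVM resolves "hadoop" to hadoop.dll and throws
            // UnsatisfiedLinkError if it is not on the search path.
            System.loadLibrary("hadoop");
            System.out.println("hadoop.dll loaded successfully");
        }
    }

If that load fails, comparing the printed PATH with the location of the
built dll usually shows what is going on.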

This configuration has been working fine in all of the dev environments
I've seen, but I'm wondering if something different is happening in your
environment.  Does your hadoop.dll show up in
hadoop-common-project/hadoop-common/target/bin?  Is there anything else
that looks unique in your environment?

Another potential gotcha is the Windows maximum path length limit of
260 characters.  Deeply nested project structures like Hadoop can cause
very long paths for the built artifacts, and you might not be able to load
the files if the full path exceeds 260 characters.  The workaround for now
is to keep the codebase in a very short root folder.  (I use C:\hdc .)
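
To check whether that is what's happening, a rough helper like the one below
(hypothetical, not part of the Hadoop source; point it at your build root)
will list any artifacts whose absolute path is over the limit:

    // PathLengthCheck.java - hypothetical helper, not part of Hadoop.
    // Recursively walks a directory tree and reports every file or folder
    // whose absolute path exceeds the Windows MAX_PATH limit of 260.
    import java.io.File;

    public class PathLengthCheck {
        private static final int WINDOWS_MAX_PATH = 260;

        public static void main(String[] args) {
            walk(new File(args.length > 0 ? args[0] : "."));
        }

        private static void walk(File dir) {
            File[] children = dir.listFiles();
            if (children == null) {
                return;   // not a directory, or unreadable
            }
            for (File f : children) {
                String abs = f.getAbsolutePath();
                if (abs.length() > WINDOWS_MAX_PATH) {
                    System.out.println(abs.length() + " chars: " + abs);
                }
                if (f.isDirectory()) {
                    walk(f);
                }
            }
        }
    }

For example, "java PathLengthCheck E:\Hadoop-Trunk" would flag any built
artifacts that are at risk of failing to load for this reason.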

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Mon, Jul 15, 2013 at 1:07 PM, Chuan Liu <chuan...@microsoft.com> wrote:

> Hi Uma,
>
> I suggest you do a 'mvn install -DskipTests' before running 'mvn
> eclipse:eclipse'.
>
> Thanks,
> Chuan
>
> -----Original Message-----
> From: Uma Maheswara Rao G [mailto:hadoop....@gmail.com]
> Sent: Friday, July 12, 2013 7:42 PM
> To: common-...@hadoop.apache.org
> Cc: hdfs-dev@hadoop.apache.org
> Subject: Re: mvn eclipse:eclipse failure on windows
>
> Hi Chris,
>   eclipse:eclipse works, but I am still seeing UnsatisfiedLinkError. I
> explicitly pointed java.library.path to where hadoop.dll is generated. The
> dll was generated only by my clean install command. My PC is 64-bit, and I
> also set Platform=x64 while building, but it does not help.
>
> Regards,
> Uma
>
>
> On Fri, Jul 12, 2013 at 11:45 PM, Chris Nauroth <cnaur...@hortonworks.com> wrote:
>
> > Hi Uma,
> >
> > I just tried getting a fresh copy of trunk and running "mvn clean
> > install -DskipTests" followed by "mvn eclipse:eclipse -DskipTests".
> > Everything worked fine in my environment.  Are you still seeing the
> > problem?
> >
> > The UnsatisfiedLinkError seems to indicate that your build couldn't
> > access hadoop.dll for JNI method implementations.  hadoop.dll gets
> > built as part of the hadoop-common sub-module.  Is it possible that
> > you didn't have a complete package build for that sub-module before
> > you started running the HDFS test?
> >
> > Chris Nauroth
> > Hortonworks
> > http://hortonworks.com/
> >
> >
> >
> > On Sun, Jul 7, 2013 at 9:08 AM, sure bhands <sure.bha...@gmail.com> wrote:
> >
> > > I would try cleaning the hadoop-maven-plugin directory from the maven
> > > repository to rule out a stale version, and then mvn install followed by
> > > mvn eclipse:eclipse before digging into it further.
> > >
> > > Thanks,
> > > Surendra
> > >
> > >
> > > On Sun, Jul 7, 2013 at 8:28 AM, Uma Maheswara Rao G <hadoop....@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > I am seeing this failure on Windows while executing the mvn
> > > > eclipse:eclipse command on trunk.
> > > >
> > > > See the following trace:
> > > >
> > > > [INFO] ------------------------------------------------------------------------
> > > > [ERROR] Failed to execute goal
> > > > org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli)
> > > > on project hadoop-common: Request to merge when 'filtering' is not
> > > > identical. Original=resource src/main/resources: output=target/classes,
> > > > include=[], exclude=[common-version-info.properties|**/*.java],
> > > > test=false, filtering=false, merging with=resource src/main/resources:
> > > > output=target/classes, include=[common-version-info.properties],
> > > > exclude=[**/*.java], test=false, filtering=true -> [Help 1]
> > > > [ERROR]
> > > > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> > > > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > > > [ERROR]
> > > > [ERROR] For more information about the errors and possible solutions,
> > > > please read the following articles:
> > > > [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> > > > [ERROR]
> > > > [ERROR] After correcting the problems, you can resume the build with the command
> > > >
> > > > [ERROR]   mvn <goals> -rf :hadoop-common
> > > > E:\Hadoop-Trunk>
> > > >
> > > > Any idea for resolving it?
> > > >
> > > > With 'org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse' there
> > > > seem to be no failures, but I am seeing the following exception while
> > > > running tests.
> > > > java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
> > > >     at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
> > > >     at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:423)
> > > >     at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:952)
> > > >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:451)
> > > >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:282)
> > > >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
> > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:696)
> > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:530)
> > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:401)
> > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:435)
> > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:607)
> > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:592)
> > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1172)
> > > >     at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:895)
> > > >     at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:786)
> > > >     at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
> > > >     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> > > >     at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> > > >     at org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode.setupCluster(TestHASafeMode.java:87)
> > > >
> > > > Not sure what I missed here. Any idea what could be wrong here?
> > > >
> > > > Regards,
> > > > Uma
> > > >
> > >
> >
>
>
