Sorry, resending on hdfs-dev; apparently I'm not on -user.

Begin forwarded message:

> From: Brian Bockelman <bbock...@cse.unl.edu>
> Date: March 1, 2011 6:24:28 PM CST
> Cc: hdfs-user <hdfs-u...@hadoop.apache.org>
> Subject: Re: problems with fuse-dfs
> 
> 
> Side note: Do not cross-post to multiple lists.  It annoys folks.
> 
> On Mar 1, 2011, at 11:50 AM, Aastha Mehta wrote:
> 
>> Hello,
>> 
>> I am facing problems running fuse-dfs over HDFS. I came across this
>> thread while searching for my problem:
>> http://www.mail-archive.com/hdfs-user@hadoop.apache.org/msg00341.html
>> OR
>> http://search-hadoop.com/m/T1Bjv17q0eF1&subj=Re+Fuse+DFS
>> 
>> and it describes exactly some of the symptoms I am seeing.
>> 
>> To quote Eli,
>> "fuse_impls_getattr.c connects via hdfsConnectAsUser so you should see a log
>> (unless it's returning from a case that doesn't print an error). Next
>> step is to determine that you're actually reaching the code you modified by
>> adding a syslog to the top of the function (need to make sure you're
>> actually loading the libhdfs you've built vs an older one or another one
>> installed on your system), and then determine which error case in that
>> function you're seeing. It's strange that -d would cause that path to
>> change."
>> 
>> I get the error from fuse_impls_getattr.c that it could not connect to my
>> host and port. I checked the syslogs, and only this error is printed there.
>> No error is printed from hdfsConnectAsUser(). I even added syslog calls in
>> hdfsConnectAsUser(), but they are never printed. I used the following
>> commands to compile libhdfs and then fuse-dfs from $HADOOP_HOME:
>> 
>> $ export LD_LIBRARY_PATH=$HADOOP_HOME/build/libhdfs:/usr/lib/jvm/java-1.5.0-sun:/usr/lib/jvm/java-1.5.0-sun-1.5.0.19/include:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/i386/client
>> $ ant compile-c++-libhdfs -Dlibhdfs=1
>> $ ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1 -Djava5.home=/usr/lib/jvm/java-1.5.0-sun-1.5.0.19
>> $ mkdir /media/myDrive/newhdfs
>> $ cd $HADOOP_HOME/build/contrib/fuse-dfs/
>> $ sudo sh fuse_dfs_wrapper.sh dfs://aastha-desktop:9000 /media/myDrive/newhdfs
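>> 
>> (To double-check which libhdfs.so the binary resolves at run time,
>> something like this should work; the path to the fuse_dfs binary is my
>> guess from the build layout:)
>> 
>> $ ldd $HADOOP_HOME/build/contrib/fuse-dfs/fuse_dfs | grep hdfs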
>> 
>> And when I do:
>> $ sudo cat /var/log/syslog
>> 
>> at the end, I get
>> aastha-desktop fuse_dfs: ERROR: could not connect to aastha-desktop:9000,
>> fuse_impls_getattr.c:37
>> 
>> The EIO value is 5, which I understand indicates an input/output error.
>> 
>> When I try to ls /media/myDrive/newhdfs, I get the input/output error.
>> Also, if I try to unmount newhdfs, it says that it is in use and cannot be
>> unmounted.
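>> 
>> (I assume the usual way to force such a mount off is fusermount or a lazy
>> unmount, e.g.:
>> 
>> $ sudo fusermount -u /media/myDrive/newhdfs
>> $ sudo umount -l /media/myDrive/newhdfs
>> 
>> but I would still like to understand why it gets stuck.)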
>> 
>> I even tried the -d option, but that just runs into an infinite loop of
>> connection retries and shows a socket connection exception.
>> 
>> I have been working with fuse-dfs and HDFS for more than a month now. Until
>> two days ago I could at least create fuse-dfs instances and mount them using
>> the wrapper script. I could also create files inside them, modify them, etc.,
>> and see the changes reflected on the nodes of the Hadoop cluster. Since
>> yesterday, I have been trying to create a LUN image inside fuse-dfs (using a
>> dd if=/dev/zero of=... command) and to check whether it is created properly
>> on the cluster. Yesterday, a normal file system was working, and when I
>> tried to create a LUN, my syslog showed an error coming from
>> fuse_impls_open.c. But now a normal file system instance is not getting
>> created and mounted properly either.
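>> 
>> (The dd invocation is of this form; the image path and size here are only
>> an example:
>> 
>> $ dd if=/dev/zero of=/media/myDrive/newhdfs/lun0.img bs=1M count=1024
>> )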
>> 
>> Following are my questions based on this observation:
>> 1. Why is the fuse_impls_getattr.c error occurring now, and how can it be
>> resolved?
> 
> Did you try adding "-d" to the options to run in debug mode?
> 
> What are the resulting options for "fuse_dfs" if you do "ps faux"?
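> 
> For example (bracketing the first letter keeps grep from matching its own
> process entry):
> 
> $ ps faux | grep '[f]use_dfs'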
> 
>> 2. Does ant compile-c++-libhdfs compile libhdfs? Will the changes I make to
>> the libhdfs source files be picked up by that command? Since I can't see any
>> syslog statements printed from hdfsConnectAsUser(), I am not sure libhdfs
>> has been recompiled. Also, I cannot see any object files created inside
>> libhdfs, which makes me doubt it even more.
> 
> 
> Yes.  One thing to check is to perform "lsof" on the running process and 
> verify it is pulling the libhdfs.so you want (and to do an "ls -l" to check 
> the timestamp).
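> 
> Something along these lines should do it (assuming the process is named
> fuse_dfs):
> 
> $ sudo lsof -p $(pgrep fuse_dfs) | grep libhdfs
> $ ls -l $HADOOP_HOME/build/libhdfs/libhdfs.so*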
> 
> If you are missing debug statements, then you are likely not running the 
> version of code you think you are.
> 
>> 3. How do libhdfs.so and libhdfs.so.0 come into the picture in all this? We
>> have provided links to them in build/libhdfs; how do they work?
>> 
> 
> These are just standard results from libtool.  Linux will link against 
> libhdfs.so, but libtool typically makes that libhdfs.so -> libhdfs.so.0 -> 
> libhdfs.so.0.0.
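> 
> On a typical build the chain looks something like this (the exact version
> suffixes may differ):
> 
> $ ls -l $HADOOP_HOME/build/libhdfs/
> libhdfs.so -> libhdfs.so.0
> libhdfs.so.0 -> libhdfs.so.0.0
> libhdfs.so.0.0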
> 
> Brian
> 
>> Any help with these issues would be highly appreciated.
>> 
>> Thanks,
>> Aastha.
>> 
>> 
>> -- 
>> Aastha Mehta
>> Intern, NetApp, Bangalore
>> 4th year undergraduate, BITS Pilani
>> E-mail: aasth...@gmail.com
> 
