On Mar 2, 2011, at 12:46 AM, Aastha Mehta wrote:

> Thank you very much for replying. And I am sorry for mailing users and
> dev together; I was not sure where the question belonged.
> 
> I did try the -d option while running the wrapper script. It runs into an
> infinite loop of connection retries and I can see a socket connection
> exception thrown as well. I have to terminate the process, and at the end
> it shows "Transport endpoint is not connected".
> 

Can you post the attempts?  Sounds like it may not be configured correctly.

> lsof on /media/myDrive/newhdfs returns a warning:
> lsof: WARNING: can't stat fuse.fuse_dfs file system on /media/myDrive/newhdfs
>       Output information may be incomplete.
> lsof: status error on /media/myDrive/newhdfs: Input/output error
> 
> $ lsof -c fuse_dfs
> COMMAND   PID   USER  FD    TYPE     DEVICE  SIZE/OFF  NODE  NAME
> fuse_dfs  2884  root  cwd   unknown                          /proc/2884/cwd (readlink: Permission denied)
> fuse_dfs  2884  root  rtd   unknown                          /proc/2884/rtd (readlink: Permission denied)
> fuse_dfs  2884  root  txt   unknown                          /proc/2884/txt (readlink: Permission denied)
> fuse_dfs  2884  root  NOFD  unknown                          /proc/2884/fd (opendir: Permission denied)
> 

This is not useful.  As it says in the output, you don't have permission to 
perform this operation.
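
For what it's worth, running lsof as root (and pointing it at the PID from
your ps output) should avoid those readlink/opendir permission errors, and it
also shows which libhdfs.so the process actually has mapped.  Something like:

  sudo lsof -p 2884                  # full listing, without the permission warnings
  sudo lsof -p 2884 | grep libhdfs   # which libhdfs.so did fuse_dfs load?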

> $ ps faux (listing only the relevant processes)
> root     283   0.0  0.2   4104  1192 ?   S    Mar01   0:00 mountall --daemon
> hadoop   1263  0.0  0.5  30284  2560 ?   Ssl  Mar01   0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/hadoop/.gvfs
> root     2884  0.3  8.3 331128 42756 ?   Ssl  Mar01   3:21 ./fuse_dfs dfs://aastha-desktop:9000 /media/myDrive/hell
> 

This looks strange.  The relevant line from my running systems looks like this:

root     11767  0.1  4.2 6739536 1054776 ?     Ssl  Feb17  23:54 
/usr/lib/hadoop-0.20/bin/fuse_dfs /mnt/hadoop -o 
rw,server=hadoop-name,port=9000,rdbuffer=32768,allow_other

It could be that you are simply invoking fuse_dfs incorrectly.  I've never used 
fuse_dfs_wrapper.sh myself.  I use the script below (*) and then mount via 
fstab:

hdfs /mnt/hadoop fuse server=hadoop-name,port=9000,rdbuffer=32768,allow_other 0 0
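
With that entry in place, mounting is just:

  sudo mount /mnt/hadoop

As far as I can tell, mount.fuse ends up handing the mount point and options
to the /usr/bin/hdfs wrapper (shown at the bottom of this mail), so the above
is roughly equivalent to the fuse_dfs invocation you see in my ps output:

  sudo /usr/lib/hadoop-0.20/bin/fuse_dfs /mnt/hadoop -o rw,server=hadoop-name,port=9000,rdbuffer=32768,allow_other

(That's a sketch of my setup; substitute your own namenode host, port, and
mount point.)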

> Regarding libhdfs, I checked where it is located on my system. It is
> present only in the hadoop directories:
> /home/hadoop/hadoop/hadoop-0.20.2/src/c++/libhdfs/
> /usr/local/hadoop/hadoop-0.20.2/src/c++/libhdfs/
> 
> Now, I cannot understand why the changes to the libhdfs code are not
> reflected.
> 

Are those libraries on the linker's path?
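
A quick way to check (paths assumed from your LD_LIBRARY_PATH above; adjust as
needed):

  echo $LD_LIBRARY_PATH
  ls -l $HADOOP_HOME/build/libhdfs/libhdfs.so*                      # timestamps show whether your rebuild landed here
  ldd $HADOOP_HOME/build/contrib/fuse-dfs/fuse_dfs | grep -i hdfs   # which libhdfs the binary resolves, if dynamically linked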

> Thanks again for your help.
> 
> Regards,
> 
> Aastha.
> 
> 
> On 2 March 2011 05:56, Brian Bockelman <bbock...@cse.unl.edu> wrote:
> 
>> Sorry, resending on hdfs-dev; apparently I'm not on -user.
>> 
>> Begin forwarded message:
>> 
>> *From: *Brian Bockelman <bbock...@cse.unl.edu>
>> *Date: *March 1, 2011 6:24:28 PM CST
>> *Cc: *hdfs-user <hdfs-u...@hadoop.apache.org>
>> *Subject: **Re: problems with fuse-dfs*
>> 
>> 
>> Side note: Do not cross post to multiple lists.  It annoys folks.
>> 
>> 
>> On Mar 1, 2011, at 11:50 AM, Aastha Mehta wrote:
>> 
>> Hello,
>> 
>> I am facing problems running fuse-dfs over hdfs. I came across this thread
>> while searching for my problem:
>> 
>> http://www.mail-archive.com/hdfs-user@hadoop.apache.org/msg00341.html
>> OR
>> http://search-hadoop.com/m/T1Bjv17q0eF1&subj=Re+Fuse+DFS
>> 
>> and it mentions exactly some of the symptoms I am seeing.
>> 
>> To quote Eli,
>> 
>> "fuse_impls_getattr.c connects via hdfsConnectAsUser so you should see a
>> log
>> 
>> (unless its returning from a case that doesn't print an error). Next
>> 
>> step is to determine that you're actually reaching the code you modified by
>> 
>> adding a syslog to the top of the function (need to make sure you're
>> 
>> actually loading the libhdfs you've built vs an older one or another one
>> 
>> installed on your system), and then determine which error case in that
>> 
>> function you're seeing. It's strange that -d would cause that path to
>> 
>> change."
>> 
>> 
>> I get the error from fuse_impls_getattr.c that it could not connect to my
>> host and port. I checked the syslogs and only this error is printed there.
>> No error is printed from hdfsConnectAsUser(). I even added syslog calls in
>> hdfsConnectAsUser, but they are not printed. I used the following commands
>> to compile libhdfs and then fuse-dfs from $HADOOP_HOME:
>> 
>> 
>> $ export LD_LIBRARY_PATH=$HADOOP_HOME/build/libhdfs:/usr/lib/jvm/java-1.5.0-sun:/usr/lib/jvm/java-1.5.0-sun-1.5.0.19/include:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/i386/client
>> $ ant compile-c++-libhdfs -Dlibhdfs=1
>> $ ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1 -Djava5.home=/usr/lib/jvm/java-1.5.0-sun-1.5.0.19
>> $ mkdir /media/myDrive/newhdfs
>> $ cd $HADOOP_HOME/build/contrib/fuse-dfs/
>> $ sudo sh fuse_dfs_wrapper.sh dfs://aastha-desktop:9000 /media/myDrive/newhdfs
>> 
>> 
>> And when I do:
>> 
>> $ sudo cat /var/log/syslog
>> 
>> at the end I get:
>> 
>> aastha-desktop fuse_dfs: ERROR: could not connect to aastha-desktop:9000, fuse_impls_getattr.c:37
>> 
>> 
>> The EIO value is 5, which I understand implies an input/output error.
>> 
>> When I try to ls /media/myDrive/newhdfs, I get the input/output error. Also,
>> if I try to unmount newhdfs, it says that it is in use and cannot be
>> unmounted.
>> 
>> I even tried the -d option, but that just runs into an infinite loop of
>> connection retries and shows a socket connection exception.
>> 
>> 
>> I have been working with fuse-dfs and hdfs for more than a month now. Until
>> two days ago I could at least create fuse-dfs instances and mount them using
>> the wrapper script. I could also create files inside them, modify them, etc.,
>> and view the changes being updated in the nodes of the hadoop cluster. Since
>> yesterday, I have been trying to create a LUN image inside fuse-dfs (using a
>> dd if=/dev/zero of=... command) and check whether it is created properly on
>> the cluster or not. Yesterday, a normal file system was working, and when I
>> tried to create a LUN, my syslog showed an error coming from
>> fuse_impls_open.c. But now a normal file system instance is also not getting
>> created and mounted properly.
>> 
>> 
>> Following are my questions based on this observation:
>> 
>> 1. Please tell me why the fuse_impls_getattr.c error is coming now and how
>> it can be resolved.
>> 
>> Did you try adding "-d" to the options to run in debug mode?
>> 
>> What are the resulting options for "fuse_dfs" if you do "ps faux"?
>> 
>> 
>> 2. Does ant compile-c++-libhdfs compile libhdfs? Will the changes that I
>> make in source files in libhdfs be picked up by that command? Since I can't
>> see any syslog statements getting printed from hdfsConnectAsUser(), I am not
>> sure whether libhdfs has been recompiled. Also, I cannot see any object
>> files created inside libhdfs, which makes me doubt it even more.
>> 
>> 
>> 
>> Yes.  One thing to check is to perform "lsof" on the running process and
>> verify it is pulling the libhdfs.so you want (and to do an "ls -l" to check
>> the timestamp).
>> 
>> If you are missing debug statements, then you are likely not running the
>> version of code you think you are.
>> 
>> 
>> 3. How do libhdfs.so and libhdfs.so.0 come into the picture in all this? We
>> have provided links to them in build/libhdfs; how do they work?
>> 
>> 
>> 
>> These are just standard results from libtool.  Linux will link against
>> libhdfs.so, but libtool typically makes that libhdfs.so -> libhdfs.so.0 ->
>> libhdfs.so.0.0.
>> 
>> Brian
>> 
>> 
>> Any help with these issues would be highly appreciated.
>> 
>> 
>> Thanks,
>> 
>> Aastha.
>> 
>> 
>> 
>> --
>> 
>> Aastha Mehta
>> 
>> Intern, NetApp, Bangalore
>> 
>> 4th year undergraduate, BITS Pilani
>> 
>> E-mail: aasth...@gmail.com
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Aastha Mehta
> Intern, NetApp, Bangalore
> 4th year undergraduate, BITS Pilani
> E-mail: aasth...@gmail.com


(*) [bbockelm@t3-sl5 ~]$ cat /usr/bin/hdfs
#!/bin/bash

/sbin/modprobe fuse

export HADOOP_HOME=/usr/lib/hadoop-0.20

if [ -f /etc/default/hadoop-0.20-fuse ] 
        then . /etc/default/hadoop-0.20-fuse
fi

if [ -f $HADOOP_HOME/bin/hadoop-config.sh ] 
        then . $HADOOP_HOME/bin/hadoop-config.sh  
fi

if [ "$LD_LIBRARY_PATH" = "" ]
        then JVM_LIB=`find ${JAVA_HOME}/jre/lib -name libjvm.so |tail -n 1`
        export LD_LIBRARY_PATH=`dirname $JVM_LIB`:/usr/lib/

fi
for i in ${HADOOP_HOME}/*.jar ${HADOOP_HOME}/lib/*.jar
        do CLASSPATH+=$i:
done

export PATH=$PATH:${HADOOP_HOME}/bin/
CLASSPATH=/etc/hadoop-0.20/conf:$CLASSPATH
env CLASSPATH=$CLASSPATH ${HADOOP_HOME}/bin/fuse_dfs $@
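
For debugging, the same wrapper can also be run by hand; adding -d keeps
fuse_dfs in the foreground and makes FUSE log every request.  A sketch, using
the server, port, and mount point from my setup above (substitute your own):

  sudo /usr/bin/hdfs /mnt/hadoop -o rw,server=hadoop-name,port=9000,rdbuffer=32768,allow_other -d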

