On Wed, Feb 28, 2007 at 05:38:28PM +0100, Joaquin Herrero wrote:
> Hi everyone,
> 
> I'm trying to set up an sftp server for my boss using OpenBSD. It will be
> used for heavy work from 10 remote places around the country.
> The file repository is on a Windows 2003 Server, so I have to mount that
> repository to store the uploaded files there.
> As OpenBSD does not have smbmount, I first tried mounting the remote file
> system with sharity-light.
> 
> The file repository has a very crowded top level, with some 20,000
> directories in the root directory. I cannot change that.
> 
> With sharity-light the remote machine gets mounted ok, but when I issue an
> "ls" on the root directory, I get a partial list of directories and then the
> listing hangs forever.
> 
> I then installed "Services for Unix" (SFU) on the Windows server and
> mounted the remote drive via NFS:
> 
> # mount -t nfs -o -T winserver:/Data  /mnt/winserver
> 
> Then tried the "ls". Same result: partial list and hangs.
> 
> Both NFS and sharity-light work ok for the "normal" operations, such as
> file upload, download and partial directory listings. But as you know,
> graphical ftp clients display the directory listing on their first screen,
> so they are not usable with the OpenBSD machine.
> 
> I tried the same from a Linux machine and it works ok in both tests: with
> smbmount and with an NFS mount. I have to wait some 30 seconds to get the
> full listing, but it works.
> 
> It seems to me that there's some kind of limit that I am reaching in OpenBSD
> with that "ls". But I am completely lost.
> 
> Can you help me with some ideas, please?

It might not actually be OpenBSD's fault.

I tried the following script on an NFS share, mounted from a Feb 17
4.1-beta/i386 box to another such box (self-compiled; the second has
RAIDframe built in):

#!/bin/sh

for j in `jot 101 0 100`; do
        echo "`date '+%H%M%S'`: ${j}000";
        for i in `jot 1000 0 1000`; do  # Yes, I realize that should
                                        # have been 1 now... still, not
                                        # a problem
                mkdir /home/joachim/www/test-$j-$i || {
                        echo "mkdir $j-$i failed";
                        exit 1; }
        done
        ls /home/joachim/www/testdir >/home/joachim/out
done

The result was:

000602: 0000
000612: 1000
000630: 2000
000643: 3000
000653: 4000
000710: 5000
000726: 6000
000745: 7000
000807: 8000
000830: 9000
000902: 10000
000918: 11000
000947: 12000
001002: 13000
001017: 14000
001034: 15000
001102: 16000
001117: 17000
001138: 18000
001154: 19000
001221: 20000
001238: 21000
001252: 22000
001307: 23000
001334: 24000
001349: 25000
001404: 26000
001419: 27000
001445: 28000
001501: 29000
001516: 30000
001531: 31000
001557: 32000
mkdir: /home/joachim/www/test-32-763: Input/output error
mkdir 32-763 failed

This was on an almost-empty filesystem exported via NFS. The I/O error
occurred when some table got too big: I can still create files and
directories anywhere other than the root, and files on the root, but no
more directories there. This is true both via NFS and directly on the
host. (There appear to be 32765 directories on the root at this moment;
that is almost exactly 50% of all inodes available on the filesystem,
which may or may not mean anything. It is also just enough to put the
root directory's link count, 2 plus the number of subdirectories, at
32767, the FFS hard-link limit, which looks like the more likely
culprit.)
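
To check whether it really is the link-count limit rather than inode
exhaustion, something along these lines should show it (paths are from my
test above, and I'm assuming getconf(1) can report LINK_MAX for that
filesystem):

#!/bin/sh
# Sketch: compare the directory's hard-link count against LINK_MAX.
# On FFS a directory has 2 + <number of subdirectories> links, so
# 32765 subdirectories would put it right at a 32767 limit.
dir=/home/joachim/www
echo "link count: `ls -ld $dir | awk '{print $2}'`"
echo "subdirs:    `ls -l $dir | grep -c '^d'`"
echo "LINK_MAX:   `getconf LINK_MAX $dir`"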

Also, the ls command in sftp works fine, both when sftp'ing to localhost
and to the NFS server.
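
(By "the ls command in sftp" I mean nothing fancier than this, with
hostnames and paths from my setup:)

$ sftp joachim@localhost
sftp> ls /home/joachim/www

$ sftp joachim@calliope
sftp> ls /var/www/users/joachim

Both return the full listing eventually.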

In this case, the mount options are:

calliope:/var/www/users/joachim on /home/joachim/www type nfs (nodev, nosuid, 
v3, udp, soft, intr, timeo=100)

Of course, everything does take a while; ls via sftp isn't particularly
fast, for example. (Looking at the various blinkenlights, I'd venture a
guess that this particular process is disk bound, especially towards the
end, which seems logical. Note that the mounting machine is *much*
faster than the server.)

                Joachim
