On Thu, 1 Mar 2007, Joachim Schipper wrote:

> It might not actually be OpenBSD's fault.
> 
> I tried the following script on an NFS share mounted from a Feb 17
> 4.1-beta/i386 box to another such box (this is a self-compiled version;
> the second has RAIDframe built in):
> 
> #!/bin/sh
> 
> for j in `jot 101 0 100`; do
>       echo "`date '+%H%M%S'`: ${j}000";
>       for i in `jot 1000 0 1000`; do  # Yes, I realize that should
>                                       # have been 1 now... still, not
>                                       # a problem
>               mkdir /home/joachim/www/test-$j-$i || {
>                   echo "mkdir $j-$i failed";
>                       exit 1; }
>       done
>       ls /home/joachim/www/testdir >/home/joachim/out
> done
> 
> The result was:
> 
> 000602: 0000
> 000612: 1000
> 000630: 2000
> 000643: 3000
> 000653: 4000
> 000710: 5000
> 000726: 6000
> 000745: 7000
> 000807: 8000
> 000830: 9000
> 000902: 10000
> 000918: 11000
> 000947: 12000
> 001002: 13000
> 001017: 14000
> 001034: 15000
> 001102: 16000
> 001117: 17000
> 001138: 18000
> 001154: 19000
> 001221: 20000
> 001238: 21000
> 001252: 22000
> 001307: 23000
> 001334: 24000
> 001349: 25000
> 001404: 26000
> 001419: 27000
> 001445: 28000
> 001501: 29000
> 001516: 30000
> 001531: 31000
> 001557: 32000
> mkdir: /home/joachim/www/test-32-763: Input/output error
> mkdir 32-763 failed
> 
> This was on an almost-empty filesystem exported via NFS. The I/O error
> occurred when some table got too big: I can still create files and
> directories anywhere except on the root of the share, and files on the
> root, but no new directories there. This is true both via NFS and
> directly on the host. (It appears there are 32765 directories on the
> root at this moment; this is almost precisely 50% of all inodes
> available on the filesystem, which may or may not mean anything.)
> 
> Also, the ls command in sftp works fine, both when sftp'ing to localhost
> and to the NFS server.
> 
> In this case, the mount options are:
> 
> calliope:/var/www/users/joachim on /home/joachim/www type nfs (nodev, nosuid, v3, udp, soft, intr, timeo=100)
> 
> Of course, everything does take a while; ls via sftp isn't particularly
> fast, for example. (Looking at the various blinkenlights, I'd venture a
> guess that this particular process is disk bound, especially towards the
> end, which seems logical. Note that the mounting machine is *much*
> faster than the server.)

A directory (or any file) cannot have more than 2^15 - 1 (32767) hard
links to it. In the inode, di_nlink is a signed 16-bit type.
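
If you want to check the limit for a given filesystem instead of reading
the inode definition, pathconf(2) exposes it as LINK_MAX. A minimal
sketch, assuming getconf(1) on your system knows the LINK_MAX path
variable (the path below is just an example):

	# Maximum hard-link count on the filesystem holding this path;
	# on FFS this should print 32767 (2^15 - 1).
	getconf LINK_MAX /home/joachim/www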

For directories, you have the . and .. links: the directory's name plus
its own . account for two, and each subdirectory's .. adds one more, so
that leaves room for 32765 subdirectories (plain files don't bump the
parent's link count). So (after fixing the counts in your script ;-) I
get, after about 2 minutes:

mkdir: lots/test-32-766: Too many links
mkdir 32-766 failed

I ran this locally on my file server. 
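
The accounting is easy to watch from the shell: the link count is the
second field of ls -ld, it starts at 2 (the directory's name plus its
own .), and every subdirectory's .. adds one. A rough sketch of the same
idea, not the exact commands I used (path and names are just examples):

	#!/bin/sh
	# Create subdirectories until mkdir refuses, then show the parent's
	# link count; on FFS expect 32765 subdirectories and a count of 32767.
	parent=/tmp/linktest
	mkdir -p "$parent" || exit 1
	i=0
	while mkdir "$parent/d$i" 2>/dev/null; do
		i=$((i + 1))
	done
	echo "created $i subdirectories"
	ls -ld "$parent" | awk '{print "link count:", $2}'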

The directory with 32765 dirs in it can be listed without problems,
both locally and on an NFS client. This is using a UDP mount.

All this is running -current; the OP didn't say which version he is
running. There have been NFS-related fixes put in the tree during this
year, IIRC.

        -Otto
