:That's because any "consensus" would be inappropriate for mass consumption.
:It really depends on a lot of fun things like the average file size and the
:number of files the drives will be storing. For example, a mail server
:might want more inodes than a database server. The mail server will likely
:have a lot of tiny files, whereas the database server would have a collection
:of much larger ones (a few KB vs several MB each).
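(To put rough numbers on that: with the default -i 8192, a 60GB filesystem
gets about 60e9/8192, i.e. roughly 7.3 million inodes. A mail spool full of
2KB messages can genuinely burn through those; a database holding a handful
of multi-GB files will barely touch them.)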
:
:What makes you think the defaults are unreasonable? I set up a 300GB
:filesystem a few months ago. I ran a few numbers, calculated my average
:file size, compared it to the defaults, and found they were very close to
:reasonable. When I get a couple hundred gigs of data on there I'll know
:better, but I think my guesstimates are very good.
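(If you want to run the same numbers on your own data, something like the
pipe below works; the path is only an example, point it at your own tree:

    find /home -type f -print0 | xargs -0 ls -l | \
        awk '{bytes += $5; files++} END {print bytes/files " bytes/file over " files " files"}'

i.e. total bytes divided by file count gives you the average file size to
weigh against -i.)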
:
:Matt
:
:> -----Original Message-----
:> From: A G F Keahan [mailto:[EMAIL PROTECTED]]
:> Sent: Wednesday, December 06, 2000 7:53 PM
:> To: [EMAIL PROTECTED]
:> Subject: Optimal UFS parameters
:>
:> What parameters should I choose for a large (say, 60 or 80GB)
:> filesystem? I remember a while ago someone (phk?) conducted a survey,
:> but nothing seems to have come of it. In the meantime, the capacity of
:> an average hard drive has increased tenfold, and the defaults have
:> become even less reasonable.
:>
:> What's the current consensus?
:>
:> newfs -b ????? -f ????? -c ?????
Well, in general I think the defaults are a little overkill... but
that may be a good thing. I don't recall us ever getting more than
a handful of complaints about a filesystem running out of inodes.
Running out of inodes is really annoying and it is best to avoid it.
Still, unless your large partition is being used for something like,
oh, /home in a multi-user environment, you can probably optimize
the newfs parameters a bit to reduce fsck times and indirect block lookup
overhead.
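A quick way to sanity-check an existing filesystem is df -i; the iused
and ifree columns show whether the inode density matches the workload,
e.g. (the mount point is just an example):

    df -i /home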
The default filesystem parameters are:
newfs -f 1024 -b 8192 -i 8192 -c 16 ...
If you are not going to have a lot of tiny files I would recommend
something like this:
newfs -f 2048 -b 16384 -i 16384 -c 32 ...
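For instance, on a hypothetical 80GB data disk it would look like this
(the device name is a placeholder, use your own):

    newfs -f 2048 -b 16384 -i 16384 -c 32 /dev/da0s1e   # placeholder device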
You can play with -c and -i, but for a production system the block
size (-b) should be either 8192 (the default) or 16384. The
filesystem buffer cache is only tuned well for those two sizes, and
going larger won't help anyway since the kernel already clusters
adjacent blocks.
Doubling -i from the default halves the number of inodes available.
Doubling the cylinders per group reduces the number of allocation
groups. If you reduce the number of groups too much your filesystems
will become more prone to fragmentation, so don't go overboard. If
you increase the number of bytes/inode (-i) too much the filesystem
will not have enough inodes and you will run out.
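Concretely: on an 80GB filesystem, -i 8192 works out to about 80e9/8192,
roughly 9.8 million inodes, and -i 16384 to about 4.9 million. Unless
your average file is under 16K or so, 4.9 million is still far more than
you will ever allocate.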
For a general purpose filesystem I would not go above -i 16384 -c 64.
If the filesystem is going to house a big database (which has many
fewer files), you can use a much larger -i but you still shouldn't
go overboard with -c.
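For example, a partition holding a few hundred multi-MB database files
might use something like (the -i value is only an illustration, size it
to your own files):

    newfs -f 2048 -b 16384 -i 262144 -c 32 ...

which on an 80GB filesystem still leaves roughly 300,000 inodes, far
more than such a database will ever create.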
-Matt