:(1) If the directory file is less than one page, there will be memory
:wasted to internal fragmentation. Why don't we set a limit, say one
:page, below which we do not use VMIO for a directory?
It is true that when we use VMIO to back directories, a minimum of
one page of memory is used even for a small directory. However, unlike
B_MALLOC space in the buffer cache, the VM page cache is much better
suited to determining when cached VM pages can be reused. So even
though there is waste, the page can still be reclaimed by the system
fairly quickly if need be. When we use B_MALLOC space in the buffer
cache to store a small directory 'efficiently', it tends to get reused
too quickly due to the small size of the buffer cache, which results
in another physical I/O the next time the directory needs to be
accessed. Given the choice between some wastage (which is less than
you think) and having to do another physical I/O, it is clear that the
advantage lies in keeping the waste and avoiding the physical I/O.
:(2) If VMIO for directories is not desirable for some reason, how about
:bumping up the usecount of the buffer used by a directory file to let it
:stay in the queue longer?
This is how the old algorithm worked. It failed utterly to address
the problem and in fact led to a considerable amount of complexity and
wasted CPU cycles when the buffer cache became unbalanced (due to
excessive write loading or directory scanning loading).
:(3) Or maybe we can add a parameter to the filesystem, telling it to try
:to preallocate some contiguous disk space for all directory files. I
:guess that the cost per bit on disk is less than the cost per bit in
:memory.
I believe the filesystem already does this.
-Matt
Matthew Dillon
<[EMAIL PROTECTED]>
:Can anyone give me an idea on how big a directory could be in some
:environment?
:
:Any comments or ideas are appreciated.
:
:-Zhihui