:
:You are proposing replacing the current buffer locks with two separate
:locks, one for ownership and the other for I/O. Frankly, I do not find
:this simpler or easier to understand than what we have now. I also
The simplicity that I am requesting is that we do not add all these
BUF_KERNPROC calls all over the code base. I am requesting that we
find a solution that does not require that, because the way the
BUF_KERNPROC stuff is being done now is extremely fragile, unnecessarily
changes the VOP API by requiring yet another precondition, and has
already caused us significant problems. If the only people who can
track down and fix these problems are you, me, and Peter, then what
is going to happen when someone else tries to do something with the VFS
subsystem later on?
The solution can be as simple as encapsulating the call in something
that is already part of the buffer subsystem. If we can remove those
calls from the device drivers I will be much happier. How are we supposed
to be able to extend the code when we continue to add preconditions to
VOP_*() calls? I don't like the idea even if proper documentation were
added.
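To illustrate the sort of encapsulation I have in mind -- this is only
a sketch, the wrapper name is hypothetical, and it is not a proposed
patch -- the hand-off could be done once, in the routine that already
pushes the buffer down to the driver, instead of in every driver:

    #include <sys/param.h>
    #include <sys/buf.h>
    #include <sys/vnode.h>

    /*
     * Hypothetical wrapper: the buffer is about to leave the context
     * of the originating process, so disassociate the lock from that
     * process here, in exactly one place.
     */
    void
    bstrategy(struct buf *bp)
    {
            BUF_KERNPROC(bp);               /* hand lock to kernel */
            VOP_STRATEGY(bp->b_vp, bp);     /* existing entry point */
    }

Then a driver's strategy routine never has to know that the lock
hand-off exists at all, and the VOP precondition goes away.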
At the moment your commit contains a lot of hacks to handle situations that
have not yet occurred. I understand why you are doing it... you believe
that the situations will occur when you implement the softupdates portion
of the changes. But I do not think adding those hacks now is a good
idea. I would rather that some additional thought be put into the
lock recursion you intended for softupdates in order to make these hacks
unnecessary.
:take some issue with the cost of the lockmgr code. It is large, but
:the critical paths through it are pretty short (it was derived from
:the MACH lock code which had been pretty well tuned). There is some
It is not short. I've measured it. lockmgr() locks are so expensive
that using them in the critical path at all is creating a serious
performance problem for us. Even things like namei() calls would more
than double in speed if we could reduce the overhead of the lockmgr()
calls. The qlocks are about 10 times faster than the lockmgr() locks,
possibly even more. It's that bad.
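For what it's worth, the measurement is nothing fancy. The harness
below is a hypothetical, simplified sketch of the kind of in-kernel
cycle count I am talking about (using the current lockmgr() signature);
running the same loop over a simple spin lock pair comes out roughly an
order of magnitude cheaper:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/proc.h>

    #define LOOPS   1000000

    static __inline u_int64_t
    rdtsc64(void)
    {
            u_int64_t tsc;

            __asm __volatile("rdtsc" : "=A" (tsc));
            return (tsc);
    }

    /*
     * Time LOOPS exclusive acquire/release pairs on an initialized
     * struct lock and report the average cycles per pair.
     */
    static void
    lockmgr_bench(struct lock *lk, struct proc *p)
    {
            u_int64_t t0, t1;
            int i;

            t0 = rdtsc64();
            for (i = 0; i < LOOPS; ++i) {
                    lockmgr(lk, LK_EXCLUSIVE, NULL, p);
                    lockmgr(lk, LK_RELEASE, NULL, p);
            }
            t1 = rdtsc64();
            printf("lockmgr: %lu cycles/pair\n",
                (u_long)((t1 - t0) / LOOPS));
    }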
:I will also note that adding the surrounding splbio/splx to all the
:BUF_ macros nearly doubled the cycle count on those functions. As
:nearly every instance is already splbio protected (since the B_BUSY
:and B_WANTED code also required protection to avoid races), it would
:make a big performance improvement to just go make sure that all the
:BUF_ calls are already protected rather than needlessly add those splbio
:and splx calls.
:
: Kirk
I agree with you there. I don't think the splbio*() calls are really
an issue from a design standpoint because the problem is encapsulated...
it exists in only one place, the macro in sys/buf.h. Given the choice
between a hack that exists in one place (the splbio*() stuff) and a
hack that is strewn all over the codebase (the BUF_KERNPROC() stuff),
I'll take the hack that exists in one place every time.
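Just to make the "one place" point concrete, the thing has roughly this
shape (simplified from the idea, not quoted from sys/buf.h):

    /*
     * The splbio()/splx() pair lives inside the macro, so every
     * caller of BUF_LOCK() gets the protection without any driver
     * or filesystem having to remember it.
     */
    #define BUF_LOCK(bp, locktype) do {                             \
            int s_ = splbio();                                      \
            lockmgr(&(bp)->b_lock, (locktype), NULL, curproc);      \
            splx(s_);                                               \
    } while (0)

If the cycle count matters, we can later verify that all the callers
are already at splbio() and drop the pair here, again without touching
anything outside sys/buf.h.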
I would also like to point out that none of these side effects or
prerequisites have been documented at all.
-Matt
Matthew Dillon
<[EMAIL PROTECTED]>