Nick Piggin wrote on Tuesday, April 12, 2005 4:09 AM
> Chen, Kenneth W wrote:
> > I like the patch a lot and already benchmarked it on our db setup.  However,
> > I'm seeing a regression compared to a very, very crappy patch (see
> > attached; you can laugh at me for doing things like that :-).
>
> OK - if we go that way, perhaps the following patch may be the
> way to do it.

OK, if you are going to do it that way, then the ioc_batching code in
get_request has to be reworked.  We never push the queue hard enough to kick
it into batching mode, yet the calls to get_io_context and put_io_context in
that function are unconditional.  The execution profile shows that these two
little functions actually consume a lot of CPU cycles.
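
Roughly what I have in mind, as an untested sketch against the 2.6
ll_rw_blk.c structure (error handling and the allocation path elided):

static struct request *get_request(request_queue_t *q, int rw, int gfp_mask)
{
	struct request_list *rl = &q->rq;
	struct io_context *ioc = NULL;
	struct request *rq = NULL;

	spin_lock_irq(q->queue_lock);
	if (rl->count[rw]+1 >= q->nr_requests) {
		/*
		 * Only the near-full path needs the batching state, so
		 * only it pays for get_io_context/put_io_context.
		 * (GFP_ATOMIC because we hold the queue lock here.)
		 */
		ioc = get_io_context(GFP_ATOMIC);
		if (!ioc_batching(ioc)) {
			spin_unlock_irq(q->queue_lock);
			goto out;
		}
	}

	/* ... normal request allocation as before ... */

	spin_unlock_irq(q->queue_lock);
out:
	if (ioc)
		put_io_context(ioc);
	return rq;
}

That keeps the common (uncongested) case entirely free of the io_context
reference counting.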

AFAICS, ioc_*batching() is trying to let a process push more requests onto a
queue that is full (or nearly full), giving priority to the process that took
the last request slot.  Why do we need to go all the way to tsk->io_context
to keep track of that state?  As a clean-up bonus, I think the tracking could
be moved into the queue structure, as sketched below.
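
Something like this, as a rough sketch only (batch_pid and batch_jiffies are
made-up field names):

/* hypothetical per-queue state, added to struct request_queue */
	unsigned long	batch_jiffies;	/* when the batching window opened */
	pid_t		batch_pid;	/* task that took the last free slot */

/* did current recently take the last slot on this queue? */
static inline int blk_queue_batching(request_queue_t *q)
{
	return q->batch_pid == current->pid &&
	       time_before(jiffies, q->batch_jiffies + BLK_BATCH_TIME);
}

/* called where the last request slot is handed out */
static inline void blk_queue_set_batching(request_queue_t *q)
{
	q->batch_pid = current->pid;
	q->batch_jiffies = jiffies;
}

The obvious downside is that only one task per queue can be in the batching
window at a time, where the io_context scheme lets several batchers coexist,
so it would need benchmarking.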


> > My first reaction is that the overhead is in the wait queue setup and
> > tear-down in the get_request_wait function.  Throwing the following patch
> > on top does improve things a bit, but we are still in negative territory.
> > I can't explain why; everything is supposed to be faster.  So I'm staring
> > at the execution profile at the moment.
> >
>
> Hmm, that's a bit disappointing. Like you said though, I'm sure we
> should be able to get better performance out of this.

Absolutely.  I'm disappointed too; this is totally unexpected.  There must be
some other factor at play.
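
For reference, the shortcut I tried in get_request_wait was along these lines
(a sketch from memory, not the exact patch I attached):

static struct request *get_request_wait(request_queue_t *q, int rw)
{
	struct request *rq;

	/*
	 * Fast path: if a request is free, skip the
	 * prepare_to_wait/finish_wait round trip entirely.
	 */
	rq = get_request(q, rw, GFP_NOIO);
	while (!rq) {
		DEFINE_WAIT(wait);

		prepare_to_wait_exclusive(&q->rq.wait[rw], &wait,
					  TASK_UNINTERRUPTIBLE);
		/* re-check after queueing ourselves, to avoid a lost wakeup */
		rq = get_request(q, rw, GFP_NOIO);
		if (!rq)
			io_schedule();
		finish_wait(&q->rq.wait[rw], &wait);
	}
	return rq;
}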

