> I thought I fixed this for good in
> commit 114276ac0a3beb9c391a410349bd770653e185ce
> Author: Michael S. Tsirkin
> Date: Sun May 26 17:32:13 2013 +0300
> mm, sched: Drop voluntary schedule from might_fault()
You're right this was an older kernel. So you already fi
On Fri, Aug 09, 2013 at 04:04:07PM -0700, Andi Kleen wrote:
> The x86 user access functions (*_user) were originally very well tuned,
> with partial inline code and other optimizations.
>
> Then over time various new checks -- particularly the sleep checks for
> a voluntary preempt kernel -- destr
Andi,
You _again_ 'forgot' to Cc: peterz who is an affected maintainer and who
is keenly interested in such low level changes affecting scheduling - and
he asked to be Cc:-ed on your previous submission.
I still don't understand, why do you *routinely* do office politics crap
like that, playi
On Tue, Aug 13, 2013 at 11:09:21AM -0700, H. Peter Anvin wrote:
> On 08/09/2013 04:04 PM, Andi Kleen wrote:
> > The x86 user access functions (*_user) were originally very well tuned,
> > with partial inline code and other optimizations.
> >
> > Then over time various new checks -- particularly th
On 08/09/2013 04:04 PM, Andi Kleen wrote:
> The x86 user access functions (*_user) were originally very well tuned,
> with partial inline code and other optimizations.
>
> Then over time various new checks -- particularly the sleep checks for
> a voluntary preempt kernel -- destroyed a lot of the
On Sat, 2013-08-10 at 21:57 -0700, H. Peter Anvin wrote:
> That sounds like an issue with specific preemption policies.
Actually, voluntary/nopreempt delta for _these_ loads was nil.
-Mike
That sounds like an issue with specific preemption policies.
Mike Galbraith wrote:
>On Sat, 2013-08-10 at 21:27 -0700, H. Peter Anvin wrote:
>> On 08/10/2013 09:17 PM, Mike Galbraith wrote:
>> >>
>> >> Do you have any quantification of "munches throughput?" It seems odd
>> >> that it would be
On Sat, 2013-08-10 at 21:27 -0700, H. Peter Anvin wrote:
> On 08/10/2013 09:17 PM, Mike Galbraith wrote:
> >>
> >> Do you have any quantification of "munches throughput?" It seems odd
> >> that it would be worse than polling for preempt all over the kernel, but
> >> perhaps the additional locking
On 08/10/2013 09:17 PM, Mike Galbraith wrote:
>>
>> Do you have any quantification of "munches throughput?" It seems odd
>> that it would be worse than polling for preempt all over the kernel, but
>> perhaps the additional locking is what costs.
>
> I hadn't compared in ages, so made some fresh s
On Sat, 2013-08-10 at 09:09 -0700, H. Peter Anvin wrote:
> On 08/09/2013 10:55 PM, Mike Galbraith wrote:
> >>
> >> Now, here is a bigger question: shouldn't we be deprecating/getting rid
> >> of PREEMPT_VOLUNTARY in favor of PREEMPT?
> >
> > I sure hope not, PREEMPT munches throughput. If you nee
On 08/10/2013 11:51 AM, Linus Torvalds wrote:
> Note that you still want the *test* to be done in C code, because together
> with "unlikely()" you'd likely do pretty close to optimal code
> generation, and hiding the decrement and test and conditional jump in
> asm you wouldn't get the proper instr
On 08/10/2013 11:51 AM, Linus Torvalds wrote:
> That "kernel_stack" thing is actually getting the thread_info pointer,
> and it doesn't get cached because gcc thinks the preempt_count value
> might alias.
This is just plain braindamaged. Somewhere on my list of things is to
merge thread_info and
Right... I mentioned the need to move thread count into percpu and the other
restructuring... all of that seems essential for this not to suck.
Linus Torvalds wrote:
>On Sat, Aug 10, 2013 at 10:18 AM, H. Peter Anvin wrote:
>>
>> We could then play a really ugly stunt by marking NEED_RESCHED by
On Sat, Aug 10, 2013 at 10:18 AM, H. Peter Anvin wrote:
>
> We could then play a really ugly stunt by marking NEED_RESCHED by adding
> 0x7fff to the counter. Then the whole sequence becomes something like:
>
> subl $1,%fs:preempt_count
> jno 1f
> call __naked_preempt_s
On 08/10/2013 09:43 AM, Linus Torvalds wrote:
> On Sat, Aug 10, 2013 at 9:09 AM, H. Peter Anvin wrote:
>>
>> Do you have any quantification of "munches throughput?" It seems odd
>> that it would be worse than polling for preempt all over the kernel, but
>> perhaps the additional locking is what c
On Sat, Aug 10, 2013 at 9:09 AM, H. Peter Anvin wrote:
>
> Do you have any quantification of "munches throughput?" It seems odd
> that it would be worse than polling for preempt all over the kernel, but
> perhaps the additional locking is what costs.
Actually, the big thing for true preemption i
On 08/09/2013 10:55 PM, Mike Galbraith wrote:
>>
>> Now, here is a bigger question: shouldn't we be deprecating/getting rid
>> of PREEMPT_VOLUNTARY in favor of PREEMPT?
>
> I sure hope not, PREEMPT munches throughput. If you need PREEMPT, seems
> to me what you _really_ need is PREEMPT_RT (the rea
On Fri, 2013-08-09 at 21:42 -0700, H. Peter Anvin wrote:
> On 08/09/2013 04:04 PM, Andi Kleen wrote:
> >
> > This patch kit is an attempt to get us back to sane code,
> > mostly by doing proper inlining and doing sleep checks in the right
> > place. Unfortunately I had to add one tree sweep to a
On 08/09/2013 04:04 PM, Andi Kleen wrote:
>
> This patch kit is an attempt to get us back to sane code,
> mostly by doing proper inlining and doing sleep checks in the right
> place. Unfortunately I had to add one tree sweep to avoid a nasty
> include loop.
>
> It costs a bit of text space, but