* Waiman Long <waiman.l...@hp.com> wrote:

> On 04/10/2013 06:31 AM, Ingo Molnar wrote:
> >* Waiman Long<waiman.l...@hp.com>  wrote:
> >
> >>>That said, the MUTEX_SHOULD_XCHG_COUNT macro should die. Why shouldn't all
> >>>architectures just consider negative counts to be locked? It doesn't matter
> >>>that some might only ever see -1.
> >>I think so too. However, I don't have the machines to test out other
> >>architectures. The MUTEX_SHOULD_XCHG_COUNT is just a safety measure to
> >>make sure that my code won't screw up the kernel in other architectures.
> >>Once it is confirmed that a negative count other than -1 is fine for all
> >>the other architectures, the macro can certainly go.
> >I'd suggest just removing it in an additional patch, Cc:-ing
> >linux-a...@vger.kernel.org. The change is very likely to be fine, and if
> >not, it's easy to revert.
> >
> >Thanks,
> >
> >     Ingo
>
> Yes, I can do that. So can I put your name down as reviewer or ack'er for
> the 1st patch?
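
For context, a minimal user-space analogue of the count convention under
discussion might look like the sketch below. This is only an illustration,
not the kernel's mutex code - the names (demo_mutex, demo_spin_trylock,
demo_unlock) are made up, and the actual MUTEX_SHOULD_XCHG_COUNT definition
in the patch may differ:

/*
 * Simplified analogue of the mutex count convention: 1 = unlocked,
 * 0 = locked with no waiters, any negative value = locked (possibly
 * with waiters).  Illustrative only.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_mutex {
        atomic_int count;       /* 1: unlocked, 0: locked, <0: locked, maybe waiters */
};

/*
 * Opportunistic trylock for a spinner: read the count first and only
 * issue the xchg when the lock actually looks free, so spinners do not
 * keep dirtying the lock cacheline while the owner still holds it.
 */
static bool demo_spin_trylock(struct demo_mutex *m)
{
        if (atomic_load_explicit(&m->count, memory_order_relaxed) <= 0)
                return false;   /* treat any non-positive count as "locked" */

        /* xchg to -1: this may overwrite a 0, hence "negative means locked" */
        return atomic_exchange(&m->count, -1) == 1;
}

static void demo_unlock(struct demo_mutex *m)
{
        /* real code would wake waiters when the old count was negative */
        atomic_store(&m->count, 1);
}

int main(void)
{
        struct demo_mutex m = { .count = 1 };

        printf("first trylock:  %d\n", demo_spin_trylock(&m));  /* 1 */
        printf("second trylock: %d\n", demo_spin_trylock(&m));  /* 0 */
        demo_unlock(&m);
        printf("after unlock:   %d\n", demo_spin_trylock(&m));  /* 1 */
        return 0;
}

The read-before-xchg check is exactly where the "negative means locked"
question comes in: once the xchg can store -1 over a 0, every architecture
has to accept counts other than -1 as meaning "locked".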

Since I'll typically be the maintainer applying & pushing kernel/mutex.c
changes to Linus via the locking tree, the commit will get a Signed-off-by
from me once you resend the latest state of things - no need to add my
Acked-by or Reviewed-by right now.

I'm still hoping for another patch from you that adds queueing to the
spinners ... That approach could offer better performance than the current
patches 1, 2 and 3. In theory.
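
To make the "queueing" idea concrete, one well-known shape for it is
MCS-style queued spinning, where each spinner busy-waits on its own queue
node instead of on the shared lock word. The sketch below is only a
user-space illustration of that general technique - it is not a proposed
patch and not the kernel's code, and all names in it are made up:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        struct mcs_node *_Atomic next;
        atomic_bool locked;             /* true while this waiter must keep spinning */
};

struct mcs_lock {
        struct mcs_node *_Atomic tail;  /* last node in the queue, or NULL */
};

/* Each spinner spins on its *own* node, not on the shared lock word. */
static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *prev;

        atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&node->locked, true, memory_order_relaxed);

        prev = atomic_exchange(&lock->tail, node);
        if (!prev)
                return;                 /* queue was empty: lock acquired */

        atomic_store(&prev->next, node);
        while (atomic_load_explicit(&node->locked, memory_order_acquire))
                ;                       /* local spin until the predecessor hands off */
}

static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *next = atomic_load(&node->next);

        if (!next) {
                struct mcs_node *expected = node;

                /* no visible successor: try to mark the queue empty */
                if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
                        return;
                /* a successor is enqueueing; wait for it to link itself in */
                while (!(next = atomic_load(&node->next)))
                        ;
        }
        atomic_store_explicit(&next->locked, false, memory_order_release);
}

The appeal is that the shared lock word is touched once per acquire and
release, so spinners stop bouncing its cacheline among themselves - which
is exactly the contention a queueing patch would be aiming at.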

I'd prefer that approach because you have a testcase that shows the problem
and you are willing to maximize performance with it - that way we could make
sure we have reached maximum performance, instead of dropping patches #2 and
#3, getting only partial performance from patch #1, and never having a real,
full resolution.

Thanks,

        Ingo