On Mon, Feb 11, 2019 at 11:35:24AM -0500, Waiman Long wrote:
> On 02/11/2019 06:58 AM, Peter Zijlstra wrote:
> > Which is clearly worse. Now we can write that as:
> >
> > int __down_read_trylock2(unsigned long *l)
> > {
> > 	long tmp = READ_ONCE(*l);
> >
> > 	while (tmp >= 0) {
> > 		if (try_cmpxchg(l, &tmp, tmp + 1))
> > 			return 1;
> > 	}
> >
> > 	return 0;
> > }
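For readers without the kernel tree at hand: the point of try_cmpxchg() is that on failure it hands back the value it actually found, so the retry loop never issues a second load. GCC/Clang's __atomic_compare_exchange_n builtin has the same contract, which allows a standalone sketch of the pattern (the function name and the bare long counter are illustrative stand-ins, not the kernel API):

#include <stdio.h>

/*
 * Stand-in for the kernel's try_cmpxchg(): on failure the builtin
 * writes the value it observed back through &tmp, so the loop can
 * retry without re-reading *l.
 */
static int trylock_sketch(long *l)
{
	long tmp = __atomic_load_n(l, __ATOMIC_RELAXED);

	while (tmp >= 0) {
		if (__atomic_compare_exchange_n(l, &tmp, tmp + 1, 0,
						__ATOMIC_ACQUIRE,
						__ATOMIC_RELAXED))
			return 1;
		/* tmp now holds the conflicting value; re-test it. */
	}
	return 0;
}

int main(void)
{
	long count = 0;

	printf("trylock=%d count=%ld\n", trylock_sketch(&count), count);
	return 0;
}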
On Sun, Feb 10, 2019 at 09:00:50PM -0500, Waiman Long wrote:
> +static inline int __down_read_trylock(struct rw_semaphore *sem)
> +{
> +	long tmp;
> +
> +	while ((tmp = atomic_long_read(&sem->count)) >= 0) {
> +		if (tmp == atomic_long_cmpxchg_acquire(&sem->count, tmp,
> +					tmp + RWSEM_ACTIVE_READ_BIAS)) {
> +			return 1;
> +		}
> +	}
> +	return 0;
> +}
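This is the shape Peter is comparing against: atomic_long_cmpxchg_acquire() returns whatever value it found, and a failed attempt falls back through the while condition, which loads sem->count again - one extra memory access per retry. On ll/sc architectures the cost is worse still, since each cmpxchg is itself a load-linked/store-conditional retry loop, which is why testing on such an arch was asked for. A userspace rendering of the same shape, with illustrative names and a bare long in place of atomic_long_t, which can be dropped in next to the previous sketch's main():

/*
 * Returns the old value, like the kernel's atomic_long_cmpxchg_acquire();
 * the caller compares it against the value it expected.
 */
static long cmpxchg_sketch(long *p, long old, long new)
{
	__atomic_compare_exchange_n(p, &old, new, 0,
				    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
	return old;	/* on failure: the value actually observed */
}

static int trylock_cmpxchg_loop(long *count)
{
	long tmp;

	/*
	 * Every failed attempt re-executes this load before retrying -
	 * the redundant access the try_cmpxchg() form above avoids.
	 */
	while ((tmp = __atomic_load_n(count, __ATOMIC_RELAXED)) >= 0) {
		if (tmp == cmpxchg_sketch(count, tmp, tmp + 1))
			return 1;
	}
	return 0;
}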
* Will Deacon wrote:

> On Mon, Feb 11, 2019 at 11:39:27AM +0100, Ingo Molnar wrote:
> > Based on Peter's feedback I'm delaying this - performance testing on at
> > least one key ll/sc arch would be nice indeed.
* Ingo Molnar wrote:

> Sounds good to me - I've merged this patch, will push it out after
> testing.

Based on Peter's feedback I'm delaying this - performance testing on at
least one key ll/sc arch would be nice indeed.

Thanks,

	Ingo
On Sun, Feb 10, 2019 at 09:00:50PM -0500, Waiman Long wrote:
> diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
> index bad2bca..067e265 100644
> --- a/kernel/locking/rwsem.h
> +++ b/kernel/locking/rwsem.h
> @@ -32,6 +32,26 @@
>  # define DEBUG_RWSEMS_WARN_ON(c)
>  #endif
>
> +/*
On 02/10/2019 09:00 PM, Waiman Long wrote:
> As the generic rwsem-xadd code is using the appropriate acquire and
> release versions of the atomic operations, the arch specific rwsem.h
> files will not be that much faster than the generic code as long as the
> atomic functions are properly implemented.
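The changelog's claim - that generic code built on the acquire/release variants of the atomic operations leaves little for arch-specific headers to win back - comes down to attaching the ordering to the successful compare-and-swap itself rather than issuing a separate barrier. A C11 stdatomic rendering of that idea (illustrative name, not the kernel's function):

#include <stdatomic.h>
#include <stdbool.h>

static bool down_read_trylock_sketch(atomic_long *count)
{
	long tmp = atomic_load_explicit(count, memory_order_relaxed);

	while (tmp >= 0) {
		/*
		 * Acquire ordering only on the successful exchange: the
		 * "appropriate acquire version" the changelog refers to.
		 */
		if (atomic_compare_exchange_weak_explicit(count, &tmp,
						tmp + 1,
						memory_order_acquire,
						memory_order_relaxed))
			return true;
	}
	return false;
}

The weak variant may fail spuriously, but since tmp is refreshed with the observed value on any failure, the surrounding loop absorbs that at no extra cost.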