On Thu, 12 Sep 2013, Mathieu Desnoyers wrote:
> * Peter Zijlstra (pet...@infradead.org) wrote:
> [...]
> > Yep, that's good. I suppose if there's multiple use sites we can jump
> > through another few hoops to get rid of the specific struct foo
> > assumptions by storing sizeof() whatever we do use and playing pointer
> > math games.
> > 
> > But for now with the time stuff as only user this looks ok.
> 
> OK! Here is the full implementation of the idea against Linux
> timekeeper, ntp, and PPS. It appears that ntp and PPS were relying on
> the timekeeper seqlock too. And guess what, after booting my laptop with
> this kernel there still no smoke coming out of it after a good 5 minutes
> of testing. ;-)
> 
> Comments are welcome.

First of all this needs to be split into several patches.

>  static void update_wall_time(void)
 ...
> -     write_seqcount_begin(&timekeeper_seq);
>       /* Update clock->cycle_last with the new value */
>       clock->cycle_last = tk->cycle_last;
> -     /*
> -      * Update the real timekeeper.
> -      *
> -      * We could avoid this memcpy by switching pointers, but that
> -      * requires changes to all other timekeeper usage sites as
> -      * well, i.e. move the timekeeper pointer getter into the
> -      * spinlocked/seqcount protected sections. And we trade this
> -      * memcpy under the timekeeper_seq against one before we start
> -      * updating.
> -      */
> -     memcpy(real_tk, tk, sizeof(*tk));
> -     timekeeping_update(real_tk, action);

So you're dropping the timekeeping_update() call here. I wonder how
this works.

> +     timekeeper_write_begin(&latch_timekeeper, &prev, &tk);
> +     *tk = *prev;

Do we really need to do the copy on all call sites, instead of doing
it in timekeeper_write_begin() ?

Thanks,

        tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Reply via email to