> On Mar 26, 2025, at 6:33 PM, Paul E. McKenney <paul...@kernel.org> wrote:
> 
> On Mon, Mar 24, 2025 at 01:01:53PM -0400, Joel Fernandes wrote:
>> The rcu_seq_done_exact() function checks if a grace period has completed
>> by comparing sequence numbers. It includes a guard band to handle
>> sequence number wraparound, which was previously expressed using the
>> magic number calculation '3 * RCU_SEQ_STATE_MASK + 1'.
>> 
>> This magic number is not immediately obvious in terms of what it represents.
>> 
>> Instead, the reason we need this tiny guard band is the lag between
>> the setting of rcu_state.gp_seq_polled and the root rnp's gp_seq in
>> rcu_gp_init().
>> 
>> This guard band needs to be at least 2 GPs worth of counts, to avoid
>> recognizing a newly started GP as completed immediately. The following
>> sequence arises from the delay between the updates of
>> rcu_state.gp_seq_polled and the root rnp's gp_seq:
>> 
>> rnp->gp_seq = rcu_state.gp_seq = 0
>> 
>>    CPU 0                                           CPU 1
>>    -----                                           -----
>>    // rcu_state.gp_seq = 1
>>    rcu_seq_start(&rcu_state.gp_seq)
>>                                                 // snap = 8
>>                                                 snap = rcu_seq_snap(&rcu_state.gp_seq)
>>                                                 // Two full GP differences
>>                                                 rcu_seq_done_exact(&rnp->gp_seq, snap)
>>    // rnp->gp_seq = 1
>>    WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
>> 
>> This can happen because get_state_synchronize_rcu_full() samples
>> rcu_state.gp_seq_polled, whereas poll_state_synchronize_rcu_full()
>> samples the root rnp's gp_seq. The delay between the updates of the
>> two counters occurs in rcu_gp_init(), during which they briefly go
>> out of sync.
>> 
>> Make the guard band explicitly 2 GPs. This improves code readability
>> and maintainability by making the intent clearer.
>> 
>> Suggested-by: Frederic Weisbecker <frede...@kernel.org>
>> Signed-off-by: Joel Fernandes <joelagn...@nvidia.com>
> 
> One concern is that a small error anywhere in the code could cause this
> minimal guard band to be too small.  This is not a problem for some
> use cases (rcu_barrier() just does an extra operation, and normal grace
> periods are protected from forever-idle CPUs by ->gpwrap), but could be
> an issue on 32-bit systems for users of polled RCU grace periods.

Could you provide more details of the use case (sequence of steps) causing
an issue for 32-bit polled RCU users? I am not able to see how this patch
can affect them.

> 
> In contrast, making the guard band a bit longer than it needs to be
> has little or no downside.

Making it 3 GPs instead of 2 is fine with me as long as we document it;
at least it will not be a magic number based on an equation. I feel we
should not use random magic numbers, which are more dangerous since they
are hard to explain (and hence to debug). Just my 2 cents.

Thanks.

> 
>                            Thanx, Paul
> 
>> ---
>> kernel/rcu/rcu.h | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>> 
>> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
>> index eed2951a4962..5e1ee570bb27 100644
>> --- a/kernel/rcu/rcu.h
>> +++ b/kernel/rcu/rcu.h
>> @@ -57,6 +57,9 @@
>> /* Low-order bit definition for polled grace-period APIs. */
>> #define RCU_GET_STATE_COMPLETED    0x1
>> 
>> +/* A complete grace period count */
>> +#define RCU_SEQ_GP (RCU_SEQ_STATE_MASK + 1)
>> +
>> extern int sysctl_sched_rt_runtime;
>> 
>> /*
>> @@ -162,7 +165,7 @@ static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
>> {
>>    unsigned long cur_s = READ_ONCE(*sp);
>> 
>> -	return ULONG_CMP_GE(cur_s, s) || ULONG_CMP_LT(cur_s, s - (3 * RCU_SEQ_STATE_MASK + 1));
>> +	return ULONG_CMP_GE(cur_s, s) || ULONG_CMP_LT(cur_s, s - (2 * RCU_SEQ_GP));
>> }
>> 
>> /*
>> --
>> 2.43.0
>> 
