tree:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git master
head:   c049d56eb219661c9ae48d596c3e633973f89d1f
commit: 4109a2c3b91e5f38e401fc4ea56848e65e429785 [292/294] tipc: tipc_udp_recv() cleanup vs rcu verbs
reproduce:
        # apt-get install sparse
        git checkout 4109a2c3b91e5f38e401fc4ea56848e65e429785
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <l...@intel.com>

sparse warnings: (new ones prefixed by >>)


vim +/tipc_udp_recv +647 include/linux/rcupdate.h

^1da177e4 Linus Torvalds      2005-04-16  599  
^1da177e4 Linus Torvalds      2005-04-16  600  /*
^1da177e4 Linus Torvalds      2005-04-16  601   * So where is rcu_write_lock()?  It does not exist, as there is no
^1da177e4 Linus Torvalds      2005-04-16  602   * way for writers to lock out RCU readers.  This is a feature, not
^1da177e4 Linus Torvalds      2005-04-16  603   * a bug -- this property is what provides RCU's performance benefits.
^1da177e4 Linus Torvalds      2005-04-16  604   * Of course, writers must coordinate with each other.  The normal
^1da177e4 Linus Torvalds      2005-04-16  605   * spinlock primitives work well for this, but any other technique may be
^1da177e4 Linus Torvalds      2005-04-16  606   * used as well.  RCU does not care how the writers keep out of each
^1da177e4 Linus Torvalds      2005-04-16  607   * others' way, as long as they do so.
^1da177e4 Linus Torvalds      2005-04-16  608   */
3d76c0829 Paul E. McKenney    2009-09-28  609  
3d76c0829 Paul E. McKenney    2009-09-28  610  /**
ca5ecddfa Paul E. McKenney    2010-04-28  611   * rcu_read_unlock() - marks the end of an RCU read-side critical section.
3d76c0829 Paul E. McKenney    2009-09-28  612   *
f27bc4873 Paul E. McKenney    2014-05-04  613   * In most situations, rcu_read_unlock() is immune from deadlock.
f27bc4873 Paul E. McKenney    2014-05-04  614   * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
f27bc4873 Paul E. McKenney    2014-05-04  615   * is responsible for deboosting, which it does via rt_mutex_unlock().
f27bc4873 Paul E. McKenney    2014-05-04  616   * Unfortunately, this function acquires the scheduler's runqueue and
f27bc4873 Paul E. McKenney    2014-05-04  617   * priority-inheritance spinlocks.  This means that deadlock could result
f27bc4873 Paul E. McKenney    2014-05-04  618   * if the caller of rcu_read_unlock() already holds one of these locks or
ec84b27f9 Anna-Maria Gleixner 2018-05-25  619   * any lock that is ever acquired while holding them.
f27bc4873 Paul E. McKenney    2014-05-04  620   *
f27bc4873 Paul E. McKenney    2014-05-04  621   * That said, RCU readers are never priority boosted unless they were
f27bc4873 Paul E. McKenney    2014-05-04  622   * preempted.  Therefore, one way to avoid deadlock is to make sure
f27bc4873 Paul E. McKenney    2014-05-04  623   * that preemption never happens within any RCU read-side critical
f27bc4873 Paul E. McKenney    2014-05-04  624   * section whose outermost rcu_read_unlock() is called with one of
f27bc4873 Paul E. McKenney    2014-05-04  625   * rt_mutex_unlock()'s locks held.  Such preemption can be avoided in
f27bc4873 Paul E. McKenney    2014-05-04  626   * a number of ways, for example, by invoking preempt_disable() before
f27bc4873 Paul E. McKenney    2014-05-04  627   * critical section's outermost rcu_read_lock().
f27bc4873 Paul E. McKenney    2014-05-04  628   *
f27bc4873 Paul E. McKenney    2014-05-04  629   * Given that the set of locks acquired by rt_mutex_unlock() might change
f27bc4873 Paul E. McKenney    2014-05-04  630   * at any time, a somewhat more future-proofed approach is to make sure
f27bc4873 Paul E. McKenney    2014-05-04  631   * that that preemption never happens within any RCU read-side critical
f27bc4873 Paul E. McKenney    2014-05-04  632   * section whose outermost rcu_read_unlock() is called with irqs disabled.
f27bc4873 Paul E. McKenney    2014-05-04  633   * This approach relies on the fact that rt_mutex_unlock() currently only
f27bc4873 Paul E. McKenney    2014-05-04  634   * acquires irq-disabled locks.
f27bc4873 Paul E. McKenney    2014-05-04  635   *
f27bc4873 Paul E. McKenney    2014-05-04  636   * The second of these two approaches is best in most situations,
f27bc4873 Paul E. McKenney    2014-05-04  637   * however, the first approach can also be useful, at least to those
f27bc4873 Paul E. McKenney    2014-05-04  638   * developers willing to keep abreast of the set of locks acquired by
f27bc4873 Paul E. McKenney    2014-05-04  639   * rt_mutex_unlock().
f27bc4873 Paul E. McKenney    2014-05-04  640   *
3d76c0829 Paul E. McKenney    2009-09-28  641   * See rcu_read_lock() for more information.
3d76c0829 Paul E. McKenney    2009-09-28  642   */
bc33f24bd Paul E. McKenney    2009-08-22  643  static inline void rcu_read_unlock(void)
bc33f24bd Paul E. McKenney    2009-08-22  644  {
f78f5b90c Paul E. McKenney    2015-06-18  645  	RCU_LOCKDEP_WARN(!rcu_is_watching(),
bde23c689 Heiko Carstens      2012-02-01  646  			 "rcu_read_unlock() used illegally while idle");
bc33f24bd Paul E. McKenney    2009-08-22 @647  	__release(RCU);
bc33f24bd Paul E. McKenney    2009-08-22  648  	__rcu_read_unlock();
d24209bb6 Paul E. McKenney    2015-01-21  649  	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
bc33f24bd Paul E. McKenney    2009-08-22  650  }
^1da177e4 Linus Torvalds      2005-04-16  651  

:::::: The code at line 647 was first introduced by commit
:::::: bc33f24bdca8b6e97376e3a182ab69e6cdefa989 rcu: Consolidate sparse and lockdep declarations in include/linux/rcupdate.h

:::::: TO: Paul E. McKenney <paul...@linux.vnet.ibm.com>
:::::: CC: Ingo Molnar <mi...@elte.hu>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
