pussuw commented on code in PR #16486:
URL: https://github.com/apache/nuttx/pull/16486#discussion_r2128565957


##########
include/nuttx/spinlock.h:
##########
@@ -499,6 +543,85 @@ irqstate_t spin_lock_irqsave(FAR volatile spinlock_t *lock)
 #  define spin_lock_irqsave(l) ((void)(l), up_irq_save())
 #endif
 
+/****************************************************************************
+ * Name: spin_lock_irqsave_nopreempt
+ *
+ * Description:
+ *   If SMP is enabled:
+ *     Disable local interrupts, lock the scheduler (sched_lock), take the
+ *     spinlock, and return the previous interrupt state.
+ *
+ *     NOTE: This API provides simple protection for shared data (e.g. a
+ *     H/W register or an internal data structure) in SMP mode.  Do not
+ *     use it around kernel APIs that may suspend the calling thread
+ *     (e.g. nxsem_wait).
+ *
+ *   If SMP is not enabled:
+ *     This function is equivalent to up_irq_save() + sched_lock().
+ *
+ * Input Parameters:
+ *   lock - Caller-specific spinlock.  Must not be NULL.
+ *
+ * Returned Value:
+ *   An opaque, architecture-specific value that represents the state of
+ *   the interrupts prior to the call to spin_lock_irqsave_nopreempt(lock);
+ *
+ ****************************************************************************/
+
+static inline_function
+irqstate_t spin_lock_irqsave_nopreempt(FAR volatile spinlock_t *lock)
+{
+  irqstate_t flags;
+  flags = spin_lock_irqsave(lock);
+  sched_lock();
+  return flags;
+}
+
+/****************************************************************************
+ * Name: rspin_lock_irqsave_nopreempt
+ *
+ * Description:
+ *   Nestable spinlock that supports a maximum nesting depth of UINT8_MAX.
+ *   Because interrupts should not stay disabled for long, the scheduler
+ *   is locked as well.  Similar in spirit to enter_critical_section(),
+ *   but isolated per lock instance.
+ *
+ *   If SPINLOCK is enabled:
+ *     The underlying spinlock is taken only on the first call on each CPU.
+ *
+ *   If SPINLOCK is not enabled:
+ *     Equivalent to up_irq_save() + sched_lock().
+ *     sched_lock() is called only on the first (outermost) call.
+ *
+ * Input Parameters:
+ *   lock - Caller-specific rspinlock_s.  Must not be NULL.
+ *
+ * Returned Value:
+ *   An opaque, architecture-specific value that represents the state of
+ *   the interrupts prior to the call to rspin_lock_irqsave_nopreempt(lock);
+ *
+ ****************************************************************************/
+
+static inline_function
+irqstate_t rspin_lock_irqsave_nopreempt(FAR struct rspinlock_s *lock)
+{
+  /* A race is possible if the CPU ID is read first: the thread could
+   * migrate to another CPU before interrupts are disabled, so the CPU ID
+   * must be read with interrupts already disabled.
+   */
+
+  irqstate_t flags = up_irq_save();
+  int cpu = this_cpu();
+
+  if (lock->holder != cpu)
+    {
+      spin_lock(&lock->lock);

Review Comment:
   Do all platforms implement an SMP barrier in spin_lock()?  If not, you
need to add an SMP barrier, or use atomic_xxx to access and modify
lock->holder and lock->count.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@nuttx.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
