On Fri, Oct 06, 2023 at 02:12:31PM +0300, Vitaliy Makkoveev wrote:
> I reworked your diff a little.  First, I use a separate
> softclock_thread_mpsafe() for mpsafe timeouts.  I don't think we need
> to bind this thread to the primary CPU.
I also think a separate thread is better.  It makes no sense to grab
and release the kernel lock for each timeout.  The task(9) API uses
systq and systqmp the same way.

Pinning threads to a CPU should only be done when we know that it has
a performance benefit in a particular situation.  At the moment I
would prefer a thread that can run anywhere.
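
Roughly I would expect the mpsafe thread to be a stripped down copy of
softclock_thread(): no sched_peg_curproc(), no kernel lock around the
handlers.  Just a sketch of what I have in mind; the names follow your
diff, the sleep details are from memory and may differ from what you
actually wrote, and it assumes timeout_run() learns to skip the
KERNEL_LOCK() for mpsafe timeouts:

void
softclock_thread_mpsafe(void *arg)
{
	struct timeout *to;

	/* kthreads start with the kernel lock held; we do not need it */
	KERNEL_ASSERT_LOCKED();
	KERNEL_UNLOCK();

	mtx_enter(&timeout_mutex);
	for (;;) {
		while (CIRCQ_EMPTY(&timeout_proc_mpsafe))
			msleep_nsec(&timeout_proc_mpsafe, &timeout_mutex,
			    PSWP, "tmomp", INFSLP);
		while (!CIRCQ_EMPTY(&timeout_proc_mpsafe)) {
			to = timeout_from_circq(
			    CIRCQ_FIRST(&timeout_proc_mpsafe));
			CIRCQ_REMOVE(&to->to_list);
			/* assumes timeout_run() takes no KERNEL_LOCK() here */
			timeout_run(to);
			tostat.tos_run_thread++;
		}
		tostat.tos_thread_wakeups++;
	}
}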

> Also, on a uniprocessor machine we don't need dedicated mpsafe timeout
> processing, so that code is separated out with the preprocessor.

Makes sense.

> The man page update is included in this diff.
> 
> I'm using this with pipex(4) and PF_ROUTE sockets.

I guess there are a bunch of network timeouts we can run without the
kernel lock as soon as this feature gets in.
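
For example a subsystem that already serializes its state with its own
lock could then register its timeout like this.  This is only a sketch:
the example_* names are made up, and I am assuming timeout_set_flags(9)
keeps its current five argument form, that the new flag from your diff
is called TIMEOUT_MPSAFE, and that it is combined with TIMEOUT_PROC:

struct example_softc {
	struct mutex	sc_mtx;		/* protects the softc state */
	struct timeout	sc_tmo;
};

void
example_attach(struct example_softc *sc)
{
	mtx_init(&sc->sc_mtx, IPL_SOFTNET);
	timeout_set_flags(&sc->sc_tmo, example_timer, sc, KCLOCK_NONE,
	    TIMEOUT_PROC | TIMEOUT_MPSAFE);
	timeout_add_sec(&sc->sc_tmo, 1);
}

void
example_timer(void *arg)
{
	struct example_softc *sc = arg;

	/* runs in the mpsafe softclock thread, no KERNEL_LOCK() taken */
	mtx_enter(&sc->sc_mtx);
	/* ... periodic work protected by sc_mtx ... */
	mtx_leave(&sc->sc_mtx);

	timeout_add_sec(&sc->sc_tmo, 1);
}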

> @@ -718,11 +743,13 @@ softclock(void *arg)
>                       softclock_process_tick_timeout(to, new);
>       }
>       tostat.tos_softclocks++;
> -     needsproc = !CIRCQ_EMPTY(&timeout_proc);
> -     mtx_leave(&timeout_mutex);
> -
> -     if (needsproc)
> +     if (!CIRCQ_EMPTY(&timeout_proc))
>               wakeup(&timeout_proc);
> +#ifdef MULTIPROCESSOR
> +     if (!CIRCQ_EMPTY(&timeout_proc_mpsafe))
> +             wakeup(&timeout_proc_mpsafe);
> +#endif
> +     mtx_leave(&timeout_mutex);
>  }
>  
>  void

Was there a good reason that wakeup() ran without holding the mutex?
Do we really want to change this?
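
If we want to keep the wakeup() calls outside the mutex, the hunk could
stay close to what is there now, e.g. something like this untested
sketch with an additional needsproc_mp local:

	tostat.tos_softclocks++;
	needsproc = !CIRCQ_EMPTY(&timeout_proc);
#ifdef MULTIPROCESSOR
	needsproc_mp = !CIRCQ_EMPTY(&timeout_proc_mpsafe);
#endif
	mtx_leave(&timeout_mutex);

	if (needsproc)
		wakeup(&timeout_proc);
#ifdef MULTIPROCESSOR
	if (needsproc_mp)
		wakeup(&timeout_proc_mpsafe);
#endif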

> @@ -730,6 +757,11 @@ softclock_create_thread(void *arg)
>  {
>       if (kthread_create(softclock_thread, NULL, NULL, "softclock"))
>               panic("fork softclock");
> +#ifdef MULTIPROCESSOR
> +     if (kthread_create(softclock_thread_mpsafe, NULL, NULL,
> +         "softclock_mpsafe"))
> +             panic("fork softclock_mpsafe");
> +#endif
>  }
>  
>  void

Could you make the visible name of the thread shorter?  E.g.
"softclock_mpsafe" -> "softclockmp", so that it fits in the ps and top
output.

bluhm
