On 2/21/2021 1:34 PM, Fotis Panagiotopoulos wrote:
You would not want to use the high priority work queue; that would
interfere with real-time behavior. Nor would you want to use the low
priority work queue because that is used by most network drivers and so
would probably result in deadlocks.
Oh, I wouldn't imagine that... That's bad.
Why not create your own dedicated kernel thread?
Because I am quite short of RAM, and I wouldn't like to waste any memory in
a thread's stack.
I would prefer to somehow reuse RAM, as the workers do.
Will a deadlock in LPWORK happen even if CONFIG_SCHED_LPNTHREADS == 1?
Do I have any other way to poll the PHY?
It might not deadlock if CONFIG_SCHED_LPNTHREADS > 1. It depends on the
particular sequence of events and how the locks are held and the fact
that work queues are FIFO. If CONFIG_SCHED_LPNTHREADS > 1, then things
will no longer run FIFO. That might be worth a try.
For CONFIG_SCHED_LPNTHREADS > 1, I would expect a sequence of events like:
1. Network operation starts on an LP thread.
2. It locks the network and initiates an operation.
3. Then it unlocks the network and waits for the response (hanging that
first LP thread).
4. The driver responds and runs on a different LP thread. That logic
locks the network, provides the response, wakes up the first thread, then
unlocks the network.
5. The first LP thread runs, retakes the lock, and completes the operation.
6. The first LP thread then releases the lock and terminates.
But I might be missing something, so that is risky. If
CONFIG_SCHED_LPNTHREADS == 1, then the single LP thread is blocked at
step 3, the driver response cannot be received, and that leads to the
deadlock.
I don't know of any other way to do that other than with a dedicated
kernel thread and some timer. Wouldn't you have this same issue if you
were implementing this logic in application space? No real difference
other than the form of the APIs.