Revising my position: v4 will only have the comments update (Ben) and the
new cpu_rmap_get internal helper (Josh). IMHO, the further API rework
discussed with Ben and Eric ought to go in a later patch, as it affects
drivers.
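For reference, a minimal sketch of what a kref-based cpu_rmap_get could
look like (an assumption: the actual v4 code is not shown on this page,
and the sketch presumes struct cpu_rmap gains a 'struct kref refcount'
member initialized in alloc_cpu_rmap()):

  #include <linux/kref.h>
  #include <linux/cpu_rmap.h>

  /* Take an extra reference on a reverse-map. Kept internal to
   * lib/cpu_rmap.c so drivers keep using the existing entry points. */
  static inline void cpu_rmap_get(struct cpu_rmap *rmap)
  {
          kref_get(&rmap->refcount);
  }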
--
David Decotigny
On Wed, Jan 2, 2013 at 12:34 PM, David Decotigny wrote:
> Thanks,
Thanks,
I appreciate your review and clarification, as I was afraid there
could be something I missed. I will send v4 of this patch: an update
with the cpu_rmap_get helper that Josh suggested and the
"free_cpu_rmap" alias removed.
Regards,
--
David Decotigny
On Wed, Jan 2, 2013 at 12:29 PM, Ben Hutchings wrote:
On Sat, 2012-12-29 at 12:36 -0800, Josh Triplett wrote:
> On Sat, Dec 29, 2012 at 11:57:09AM -0800, David Decotigny wrote:
> > In some cases, free_irq_cpu_rmap() is called while holding a lock
> > (e.g. rtnl). This can lead to deadlocks, because it invokes
> > flush_scheduled_work(), which ends up waiting for the whole system
> > workqueue to flush, but some pending works might try to acquire the
> > lock we are already holding.
On Sat, 2012-12-29 at 11:57 -0800, David Decotigny wrote:
> In some cases, free_irq_cpu_rmap() is called while holding a lock
> (e.g. rtnl).
I made fairly sure that it didn't get called while holding the RTNL
lock. However, it looks like some mlx4_en ethtool ops now call it
(indirectly).
> This can lead to deadlocks, because it invokes
> flush_scheduled_work(), which ends up waiting for the whole system
> workqueue to flush, but some pending works might try to acquire the
> lock we are already holding.
On Sat, Dec 29, 2012 at 11:57:09AM -0800, David Decotigny wrote:
> In some cases, free_irq_cpu_rmap() is called while holding a lock
> (e.g. rtnl). This can lead to deadlocks, because it invokes
> flush_scheduled_work(), which ends up waiting for the whole system
> workqueue to flush, but some pending works might try to acquire the
> lock we are already holding.
In some cases, free_irq_cpu_rmap() is called while holding a lock
(e.g. rtnl). This can lead to deadlocks, because it invokes
flush_scheduled_work(), which ends up waiting for the whole system
workqueue to flush, but some pending works might try to acquire the
lock we are already holding.
This commit us
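To make the deadlock concrete, a hypothetical driver-side sketch (the
work item and function names below are invented for illustration): a
pending work item on the system workqueue needs rtnl, while the caller
already holds rtnl and free_irq_cpu_rmap() tries to flush that same
workqueue.

  #include <linux/rtnetlink.h>
  #include <linux/workqueue.h>
  #include <linux/cpu_rmap.h>

  /* Hypothetical deferred work that needs the RTNL lock. */
  static void cfg_work_fn(struct work_struct *work)
  {
          rtnl_lock();    /* blocks: teardown() below already holds rtnl */
          /* ... reconfigure the device ... */
          rtnl_unlock();
  }
  static DECLARE_WORK(cfg_work, cfg_work_fn);

  /* Hypothetical teardown path. Assume schedule_work(&cfg_work) was
   * called earlier and cfg_work has not run yet. */
  static void teardown(struct cpu_rmap *rmap)
  {
          rtnl_lock();
          /*
           * free_irq_cpu_rmap() invokes flush_scheduled_work(), which
           * waits for every pending item on the system workqueue,
           * including cfg_work, which is itself blocked on rtnl_lock():
           * deadlock.
           */
          free_irq_cpu_rmap(rmap);
          rtnl_unlock();
  }

With reference counting, as discussed in this thread, free_irq_cpu_rmap()
no longer needs to flush anything: each outstanding notifier holds its
own reference, and the rmap is freed only when the last one is dropped.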