On Fri, Feb 10, 2017 at 8:14 PM, Jakub Kicinski
<jakub.kicin...@netronome.com> wrote:
> On Fri, 10 Feb 2017 18:21:25 +0200, Or Gerlitz wrote:
>> On Fri, Feb 10, 2017 at 3:34 AM, Jakub Kicinski wrote:
>> > On Thu,  9 Feb 2017 17:38:43 +0200, Or Gerlitz wrote:
>> >> Running with CONFIG_PREEMPT set, I get a
>> >>
>> >> BUG: using smp_processor_id() in preemptible [00000000] code: tc/3793
>> >>
>> >> assertion from the TC action (mirred) stats_update callback, when they do
>> >>
>> >>       _bstats_cpu_update(this_cpu_ptr(a->cpu_bstats), bytes, packets)
>> >>
>> >> As done by commit 66860be ("nfp: bpf: allow offloaded filters to update
>> >> stats"), disabling/enabling preemption around the TC upcall solves that.
>> >>
>> >> Fixes: aad7e08d39bd ('net/mlx5e: Hardware offloaded flower filter 
>> >> statistics support')
>> >> Signed-off-by: Or Gerlitz <ogerl...@mellanox.com>
>> >> ---
>> >>
>> >> I marked it as RFC, since I wasn't fully sure about the nature of the
>> >> problem, nor whether this is the direction we should take for the fix.
>>
>> > I think it's the right fix
>>
>> Do you understand the problem? What's wrong with the call done in the TC
>> action code w.r.t. preemption?
>>
>> Does it make sense to do this (say) 100K times/sec?
>
> TC actions have per-cpu stats; referencing them has to be done with
> preemption disabled.

Yeah, looking at Documentation/this_cpu_ops.txt I see they mention that, thanks!

I will re-submit the patch with the RFC mark dropped for net.
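
For reference, a minimal sketch of the change I have in mind, assuming the
mlx5e flower stats callback (the f->exts / bytes / packets / lastuse names are
taken from that context and may differ slightly in the final patch):

        /* tcf_exts_stats_update() ends up in the action's ->stats_update()
         * callback, which updates the per-cpu bstats via this_cpu_ptr(), so
         * preemption must be off around the call, same as nfp does since
         * commit 66860be.
         */
        preempt_disable();
        tcf_exts_stats_update(f->exts, bytes, packets, lastuse);
        preempt_enable();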


> Let's CC Jamal and Cong - maybe there are some
> more clever things we could do here?  The situation in a nutshell is
> that the offload drivers read the stats from HW and want to write them
> back to the TC action stats.  The writeback happens in process context
> when the user requests a stats dump (potentially for multiple actions but we
> currently would just iterate over all actions in driver code).

Good, I will be happy to hear if Jamal/Cong/anyone has a suggestion on how to
generalize things here so it's simpler for offloading drivers to use.
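
One possible direction (just a hypothetical sketch on my side, not an existing
API) would be to hide the preemption handling in a small core helper, e.g. next
to tcf_exts_stats_update(), so that every offloading driver doesn't have to
open-code it:

        /* Hypothetical wrapper for drivers pushing HW stats back into TC
         * actions; keeps the preempt_disable()/preempt_enable() pairing in
         * one place instead of in each driver's stats path.
         */
        static inline void
        tcf_exts_stats_update_hw(struct tcf_exts *exts,
                                 u64 bytes, u64 packets, u64 lastuse)
        {
                preempt_disable();
                tcf_exts_stats_update(exts, bytes, packets, lastuse);
                preempt_enable();
        }

The mlx5e/nfp call sites would then collapse to a single
tcf_exts_stats_update_hw() call.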
