On Tue, Jul 23, 2019 at 05:16:25PM +0300, Ido Schimmel wrote:
> On Tue, Jul 23, 2019 at 08:43:40AM -0400, Neil Horman wrote:
> > On Mon, Jul 22, 2019 at 09:31:32PM +0300, Ido Schimmel wrote:
> > > +static void net_dm_packet_work(struct work_struct *work)
> > > +{
> > > +	struct per_cpu_dm_data *data;
> > > +	struct sk_buff_head list;
> > > +	struct sk_buff *skb;
> > > +	unsigned long flags;
> > > +
> > > +	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
> > > +
> > > +	__skb_queue_head_init(&list);
> > > +
> > > +	spin_lock_irqsave(&data->drop_queue.lock, flags);
> > > +	skb_queue_splice_tail_init(&data->drop_queue, &list);
> > > +	spin_unlock_irqrestore(&data->drop_queue.lock, flags);
> > > +
> > These functions are all executed in a per-cpu context. While there's
> > nothing wrong with using a spinlock here, I think you can get away with
> > just doing local_irq_save and local_irq_restore.
> 
> Hi Neil,
> 
> Thanks a lot for reviewing. I might be missing something, but please
> note that this function is executed from a workqueue and therefore the
> CPU it is running on does not have to be the same CPU to which 'data'
> belongs. If so, I'm not sure how I can avoid taking the spinlock, as
> otherwise two different CPUs could modify the list concurrently.
> 
Ah, my bad, I was under the impression that the schedule_work call for
that particular work item was actually a call to schedule_work_on, which
would have affined it to a specific CPU. That said, looking at it, I
think using schedule_work_on was my initial intent, as the work queue is
registered per CPU. Converting it to schedule_work_on would allow you to
reduce the spin_lock to a faster local_irq_save.
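Roughly what I have in mind, completely untested and only a sketch: if the
work is always queued with schedule_work_on for the CPU that owns 'data',
the handler only has to exclude the trace hook running in irq context on
its own CPU, so disabling local interrupts is sufficient:

```c
/* Untested sketch. Assumes the point that queues the work does something
 * like the following from the drop handler, so the work item always runs
 * on the CPU owning 'data':
 *
 *	schedule_work_on(smp_processor_id(), &data->dm_alert_work);
 *
 * With that guarantee in place, the spinlock can be dropped in favor of
 * local_irq_save/local_irq_restore:
 */
static void net_dm_packet_work(struct work_struct *work)
{
	struct per_cpu_dm_data *data;
	struct sk_buff_head list;
	struct sk_buff *skb;
	unsigned long flags;

	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);

	__skb_queue_head_init(&list);

	/* Only irq context on this CPU can touch drop_queue concurrently */
	local_irq_save(flags);
	skb_queue_splice_tail_init(&data->drop_queue, &list);
	local_irq_restore(flags);

	while ((skb = __skb_dequeue(&list)))
		net_dm_packet_report(skb);
}
```

Note this only works if every producer for 'data' runs on that same CPU;
otherwise the cross-CPU race you describe comes back and the spinlock is
required.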
Otherwise though, this looks really good to me.

Neil

> > 
> > Neil
> > 
> > > +	while ((skb = __skb_dequeue(&list)))
> > > +		net_dm_packet_report(skb);
> > > +}
> 