On Tue, Jul 23, 2019 at 08:43:40AM -0400, Neil Horman wrote:
> On Mon, Jul 22, 2019 at 09:31:32PM +0300, Ido Schimmel wrote:
> > +static void net_dm_packet_work(struct work_struct *work)
> > +{
> > +        struct per_cpu_dm_data *data;
> > +        struct sk_buff_head list;
> > +        struct sk_buff *skb;
> > +        unsigned long flags;
> > +
> > +        data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
> > +
> > +        __skb_queue_head_init(&list);
> > +
> > +        spin_lock_irqsave(&data->drop_queue.lock, flags);
> > +        skb_queue_splice_tail_init(&data->drop_queue, &list);
> > +        spin_unlock_irqrestore(&data->drop_queue.lock, flags);
> > +
> These functions are all executed in a per-cpu context. While there's
> nothing wrong with using a spinlock here, I think you can get away
> with just doing local_irq_save() and local_irq_restore().
Hi Neil,

Thanks a lot for reviewing.

I might be missing something, but please note that this function is
executed from a workqueue and therefore the CPU it is running on does
not have to be the same CPU to which 'data' belongs. If so, I'm not
sure how I can avoid taking the spinlock, as otherwise two different
CPUs can modify the list concurrently. (A rough sketch of the enqueue
side follows below the quoted code.)

> 
> Neil
> 
> > +        while ((skb = __skb_dequeue(&list)))
> > +                net_dm_packet_report(skb);
> > +}
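
To make the race concrete, here is a rough sketch of the enqueue side
(simplified; the helper name, the 'queued' flag and the
'net_dm_queue_len' limit are illustrative placeholders, not necessarily
the exact code in this series). It runs in the drop tracepoint probe on
the CPU that dropped the packet, while net_dm_packet_work() above may
drain the same drop_queue from any CPU:

static void net_dm_packet_enqueue_sketch(struct per_cpu_dm_data *data,
                                         struct sk_buff *nskb)
{
        unsigned long flags;
        bool queued = false;

        /* Producer side: 'data' belongs to the CPU that dropped the
         * packet.  The consumer, net_dm_packet_work(), may run on a
         * different CPU, so drop_queue.lock is what serializes the two
         * contexts; local_irq_save()/local_irq_restore() alone would
         * only guard against this CPU's interrupts.
         */
        spin_lock_irqsave(&data->drop_queue.lock, flags);
        if (skb_queue_len(&data->drop_queue) < net_dm_queue_len) {
                __skb_queue_tail(&data->drop_queue, nskb);
                queued = true;
        }
        spin_unlock_irqrestore(&data->drop_queue.lock, flags);

        if (queued)
                schedule_work(&data->dm_alert_work);
        else
                consume_skb(nskb); /* queue full; free the copy outside the lock */
}

The splice in net_dm_packet_work() then moves everything off the
per-CPU queue under the same lock, so the lock is only held for the
list manipulation and never across the netlink reporting.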