Tue, May 12, 2020 at 09:15:52AM CEST, vadym.koc...@plvision.eu wrote:
>On Tue, May 12, 2020 at 07:55:36AM +0200, Jiri Pirko wrote:
>> Mon, May 11, 2020 at 09:24:22PM CEST, vadym.koc...@plvision.eu wrote:
>> >On Mon, May 11, 2020 at 02:57:23PM +0200, Jiri Pirko wrote:
>> >> [...]
>> >> 
>> >> >diff --git a/drivers/net/ethernet/marvell/prestera/prestera_dsa.c b/drivers/net/ethernet/marvell/prestera/prestera_dsa.c
>[...]
>> >> >+netdev_tx_t prestera_sdma_xmit(struct prestera_sdma *sdma, struct sk_buff *skb)
>> >> >+{
>> >> >+        struct device *dma_dev = sdma->sw->dev->dev;
>> >> >+        struct prestera_tx_ring *tx_ring;
>> >> >+        struct net_device *dev = skb->dev;
>> >> >+        struct prestera_sdma_buf *buf;
>> >> >+        int err;
>> >> >+
>> >> >+        tx_ring = &sdma->tx_ring;
>> >> >+
>> >> >+        buf = &tx_ring->bufs[tx_ring->next_tx];
>> >> >+        if (buf->is_used) {
>> >> >+                schedule_work(&sdma->tx_work);
>> >> >+                goto drop_skb;
>> >> >+        }
>> >> 
>> >> What is preventing 2 CPUs to get here and work with the same buf?
>> >
>> >I assume you mean serialization between the recycling work and the
>> >xmit context? Actually, they only update the 'is_used' field, which
>> 
>> No.
>> 
>> >marks the buffer as in use or free. What I can see is that I may need
>> >to use something like READ_ONCE/WRITE_ONCE, but the rest looks safe
>> >to me:
>> >
>> >1) the recycler sets is_used=false only after the buffer is fully
>> >freed, and only if it was set to true.
>> >
>> >2) the xmit context takes the next buffer only if it is free
>> >(is_used=false), and sets it to true once the buffer is ready to be
>> >sent.
>> >
>> >So yes, both contexts update this field, but in a strict sequence.
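>
>> >Roughly, this is the ordering I have in mind for the
>> >READ_ONCE/WRITE_ONCE variant (just a sketch; the prepare/free steps
>> >are placeholder comments, not existing helpers):
>
>> >	/* xmit side: claim the buffer only if the recycler released it */
>> >	buf = &tx_ring->bufs[tx_ring->next_tx];
>> >	if (READ_ONCE(buf->is_used)) {
>> >		schedule_work(&sdma->tx_work);
>> >		goto drop_skb;
>> >	}
>> >	/* ... prepare the buffer for sending ... */
>> >	WRITE_ONCE(buf->is_used, true);
>> >
>> >	/* recycler (tx_work) side: only touches buffers with is_used
>> >	 * set, and releases them only after they are fully freed
>> >	 */
>> >	/* ... unmap / free the buffer ... */
>> >	WRITE_ONCE(buf->is_used, false);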
>> >
>> >If you mean protecting xmit against several CPUs, then xmit should be
>> >serialized by the kernel, and the driver uses one tx queue which (as I
>> >understand) is bound to a particular CPU.
>> 
>> How is it serialized? You get here (to prestera_sdma_xmit()) on 2 CPUs
>> with the same sdma pointer and 2 skbs.
>> 
>
>My understanding is:
>
>dev_hard_start_xmit() is the entry function called by the networking
>layer to send an skb via the device (qos scheduler, pktgen, xfrm,
>core - dev_direct_xmit(), etc).
>
>All of these callers take HARD_TX_LOCK, which locks the particular tx
>queue. And since the driver uses one tx queue, there should be no
>concurrent access inside ndo_start_xmit, right?
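>
>A rough sketch of that path (heavily simplified from __dev_queue_xmit()
>in net/core/dev.c, so take it as an illustration of the locking, not the
>exact code):
>
>	txq = netdev_core_pick_tx(dev, skb, sb_dev);
>	...
>	HARD_TX_LOCK(dev, txq, cpu);	/* __netif_tx_lock() unless LLTX */
>	if (!netif_xmit_stopped(txq))
>		skb = dev_hard_start_xmit(skb, dev, txq, &rc);
>	HARD_TX_UNLOCK(dev, txq);
>
>With a single tx queue there is only one txq->_xmit_lock, so two CPUs
>should not end up inside ndo_start_xmit() for the same device at once.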

Ah, correct. I didn't realize you have a 1:1 mapping. Thanks for the
explanation!
