On Friday 01 March 2013 10:56:33 Tijs Van Buggenhout wrote:
> On Thursday 28 February 2013 20:06:49 Hauke Mehrtens wrote:
> > Hi,
> >
> > I haven't tested your patch, I just have some comments. In general it
> > looks good to me.
>
> Would be nice if it could be tested for both types.. I'm pretty sure by now
> it's working okay for unaligned addressing, but have only one device to
> test on.
>
> > On 02/25/2013 10:18 PM, Tijs Van Buggenhout wrote:
> > >>>> Signed-off-by: Tijs Van Buggenhout (t...@able.be)
> > >>>
> > >>> Your whole patch is malformed, there are extra line breaks and tabs
> > >>> were eaten by your mailer.
> > >>
> > >> It's a cp&p from a cat'ed diff on console. Email client does a line
> > >> wrap after 80 characters in plain text. Will try as attachment next
> > >> time...
> >
> > Read this http://kerneltrap.org/Linux/Email_Clients_and_Patches to
> > configure your mail client to not mangle the patches. You could also try
> > it on your own: just send the patch to yourself and try to apply it.
>
> Sorry again..
>
> > > What do you think about the following patch?
> > >
> > > Only one change in the bgmac_dma_ring struct, a boolean named
> > > dma_unaligned. A u32 variable index_base is locally defined in the
> > > functions where it is needed, assigned the low 32 bits of dma_base in
> > > case of an unaligned address, otherwise 0.
> > >
> > > --- a/drivers/net/ethernet/broadcom/bgmac.h    2013-02-20 12:41:03.138480108 +0100
> > > +++ b/drivers/net/ethernet/broadcom/bgmac.h    2013-02-25 21:03:29.110125298 +0100
> > > @@ -384,6 +384,7 @@
> > >          u16 mmio_base;
> > >          struct bgmac_dma_desc *cpu_base;
> > >          dma_addr_t dma_base;
> > > +        bool dma_unaligned;
> > >
> > >          struct bgmac_slot_info slots[BGMAC_RX_RING_SLOTS];
> > >  };
> > >
> > > --- a/drivers/net/ethernet/broadcom/bgmac.c    2013-02-25 22:10:14.744943929 +0100
> > > +++ b/drivers/net/ethernet/broadcom/bgmac.c    2013-02-25 20:59:39.697996832 +0100
> > > @@ -108,6 +108,7 @@
> > >          struct net_device *net_dev = bgmac->net_dev;
> > >          struct bgmac_dma_desc *dma_desc;
> > >          struct bgmac_slot_info *slot;
> > > +        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
> > >          u32 ctl0, ctl1;
> > >          int free_slots;
> > >
> > > @@ -156,7 +157,7 @@
> > >          if (++ring->end >= BGMAC_TX_RING_SLOTS)
> > >                  ring->end = 0;
> > >          bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_INDEX,
> > > -                    ring->end * sizeof(struct bgmac_dma_desc));
> > > +                    ( index_base + ( ring->end * sizeof(struct bgmac_dma_desc) ) ));
> > >
> > >          /* Always keep one slot free to allow detecting bugged calls. */
> > >          if (--free_slots == 1)
> > >
> > > @@ -174,14 +175,25 @@
> > >  static void bgmac_dma_tx_free(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
> > >  {
> > >          struct device *dma_dev = bgmac->core->dma_dev;
> > > -        int empty_slot;
> > > +        u16 empty_slot;
> > > +        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
> > >          bool freed = false;
> > >
> > > +        if (ring->start == ring->end)
> > > +                return;
> > > +
> >
> > Is this an unrelated change?
>
> Not entirely related.. would be an optimisation, but when I come to think
> of it, it's probably more like a rare case. I'll have a look whether the
> hardware can deal with this case..
This is not needed .. the hardware can deal with it.

> > >          /* The last slot that hardware didn't consume yet */
> > > -        empty_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS);
> > > -        empty_slot &= BGMAC_DMA_TX_STATDPTR;
> > > +        empty_slot = (u16) ( ( bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS) -
> > > +                               index_base ) & BGMAC_DMA_TX_STATDPTR );
> >
> > I like the following more:
> >
> >         empty_slot = (u16)(bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS) - index_base);
> >         empty_slot &= BGMAC_DMA_TX_STATDPTR;
>
> Used this one.
>
> > >          empty_slot /= sizeof(struct bgmac_dma_desc);
> > >
> > > +        if (((ring->start == 0) && (empty_slot > ring->end)) ||
> > > +            (empty_slot >= ring->num_slots)) {
> > > +                bgmac_err(bgmac, "Bogus current TX slot index %u (start index: %u, end index: %u)\n",
> > > +                          empty_slot, ring->start, ring->end);
> > > +                return;
> > > +        }
> > > +
> >
> > Is this an unrelated change?
>
> This is related if you consider the
>
>   bgmac bcma0:1: Hardware reported transmission for empty TX ring slot 0! End of ring: 0
>
> spam messages.
> They only appeared because the while loop (see underneath) would never end:
> the condition (ring->start != empty_slot) would always be true when
> empty_slot >= ring->num_slots, because the body of the while loop doesn't
> allow ring->start to become any higher than ring->num_slots, resetting it
> to zero when it does, which turns it into an infinite loop.
> You can see this as a protection for the software driver in case the
> hardware fails (and returns a faulty slot id, i.e. one not within range).
> In that regard I've been thinking of setting a BUG_ON instead of the error
> report.. in case of unaligned addressing, the hardware could not recover
> from this (without being reset/reinitialised).
>
>   BUG_ON(empty_slot >= ring->num_slots);

Replaced by a BUG_ON.

> > >          while (ring->start != empty_slot) {
> > >                  struct bgmac_slot_info *slot = &ring->slots[ring->start];
> > >
> > > @@ -195,7 +207,7 @@
> > >                          dev_kfree_skb(slot->skb);
> > >                          slot->skb = NULL;
> > >                  } else {
> > > -                        bgmac_err(bgmac, "Hardware reported transmission for empty TX ring slot %d! End of ring: %d\n",
> > > +                        bgmac_err(bgmac, "Hardware reported transmission for empty TX ring slot %u! End of ring: %u\n",
> > >                                    ring->start, ring->end);
> > >                  }
> > >
> > > @@ -270,11 +282,12 @@
> > >  static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
> > >                               int weight)
> > >  {
> > > -        u32 end_slot;
> > > +        u16 end_slot;
> > > +        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
> > >          int handled = 0;
> > >
> > > -        end_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_STATUS);
> > > -        end_slot &= BGMAC_DMA_RX_STATDPTR;
> > > +        end_slot = (u16) ( ( bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_STATUS) -
> > > +                             index_base ) & BGMAC_DMA_RX_STATDPTR );
> > >          end_slot /= sizeof(struct bgmac_dma_desc);
> >
> > see above

Used this one.
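For reference, the slot arithmetic discussed above boils down to the small stand-alone sketch below. This is not the driver code: the register read is replaced by a plain parameter, and DESC_SIZE, STATDPTR_MASK and NUM_SLOTS are assumed example values standing in for sizeof(struct bgmac_dma_desc), BGMAC_DMA_TX_STATDPTR and BGMAC_TX_RING_SLOTS. It only illustrates why index_base has to be subtracted from the status register when the ring base is unaligned, and why an out-of-range slot index must be rejected before the free loop runs, since while (ring->start != empty_slot) could otherwise never terminate.

/*
 * Minimal sketch of the empty_slot/end_slot arithmetic, independent of
 * the driver. The constants are assumed example values, not the real
 * bgmac definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DESC_SIZE     16u          /* stand-in for sizeof(struct bgmac_dma_desc) */
#define STATDPTR_MASK 0x00000fffu  /* stand-in for BGMAC_DMA_TX_STATDPTR */
#define NUM_SLOTS     128u         /* stand-in for BGMAC_TX_RING_SLOTS */

/* Convert a raw status register value into a ring slot index. */
static uint16_t status_to_slot(uint32_t status, uint32_t dma_base_lo,
                               bool dma_unaligned)
{
        /* With unaligned addressing the hardware reports the descriptor
         * pointer relative to the low 32 bits of the ring base, so that
         * offset has to be subtracted before masking and dividing. */
        uint32_t index_base = dma_unaligned ? dma_base_lo : 0;
        uint16_t slot = (uint16_t)(status - index_base);

        slot &= STATDPTR_MASK;
        slot /= DESC_SIZE;
        return slot;
}

int main(void)
{
        uint32_t dma_base_lo = 0x1fa04240;  /* example unaligned ring base */
        uint32_t status = dma_base_lo + 5 * DESC_SIZE;

        /* Both views resolve to slot 5. */
        printf("aligned:   slot %u\n", (unsigned)status_to_slot(5 * DESC_SIZE, 0, false));
        printf("unaligned: slot %u\n", (unsigned)status_to_slot(status, dma_base_lo, true));

        /* A bogus status value must be caught before the TX free loop,
         * otherwise ring->start would chase a slot that never exists. */
        uint16_t bogus = status_to_slot(status + NUM_SLOTS * DESC_SIZE, dma_base_lo, true);
        if (bogus >= NUM_SLOTS)
                fprintf(stderr, "bogus slot %u, free loop would never terminate\n",
                        (unsigned)bogus);
        return 0;
}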
> > >          ring->end = end_slot;
> > >
> > > @@ -298,7 +311,7 @@
> > >
> > >                  /* Check for poison and drop or pass the packet */
> > >                  if (len == 0xdead && flags == 0xbeef) {
> > > -                        bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
> > > +                        bgmac_err(bgmac, "Found poisoned packet at slot %u, DMA issue!\n",
> > >                                    ring->start);
> > >                  } else {
> > >                          new_skb = netdev_alloc_skb(bgmac->net_dev, len + 2);
> > >
> > > @@ -416,8 +429,11 @@
> > >                  ring = &bgmac->tx_ring[i];
> > >                  ring->num_slots = BGMAC_TX_RING_SLOTS;
> > >                  ring->mmio_base = ring_base[i];
> > > -                if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_TX))
> > > -                        bgmac_warn(bgmac, "TX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
> > > +                if ((ring->dma_unaligned = bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_TX)))
> >
> > Do not do an assignment in an if condition. Assign before the condition.
>
> > > +                        bgmac_info(bgmac, "TX on ring 0x%X supports unaligned DMA addressing\n",
> > > +                                   ring->mmio_base);
> > > +                else
> > > +                        bgmac_info(bgmac, "TX on ring 0x%X supports aligned DMA addressing\n",
> > >                                     ring->mmio_base);
> >
> > Shouldn't this be debug?
>
> As you wish ;-)

Turned out to be bgmac_dbg.

> > >                  /* Alloc ring of descriptors */
> > >
> > > @@ -440,8 +456,11 @@
> > >                  ring = &bgmac->rx_ring[i];
> > >                  ring->num_slots = BGMAC_RX_RING_SLOTS;
> > >                  ring->mmio_base = ring_base[i];
> > > -                if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_RX))
> > > -                        bgmac_warn(bgmac, "RX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
> > > +                if ((ring->dma_unaligned = bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_RX)))
> >
> > see above
>
> > > +                        bgmac_warn(bgmac, "RX on ring 0x%X supports unaligned DMA addressing\n",
> > > +                                   ring->mmio_base);
> > > +                else
> > > +                        bgmac_warn(bgmac, "RX on ring 0x%X supports aligned DMA addressing\n",
> > >                                     ring->mmio_base);
> >
> > see above

Assigned before the condition.

> > >                  /* Alloc ring of descriptors */
> > >
> > > @@ -485,28 +504,43 @@
> > >          for (i = 0; i < BGMAC_MAX_TX_RINGS; i++) {
> > >                  ring = &bgmac->tx_ring[i];
> > >
> > > -                /* We don't implement unaligned addressing, so enable first */
> > > -                bgmac_dma_tx_enable(bgmac, ring);
> > > +                if (!ring->dma_unaligned)
> > > +                        /* When addressing is aligned, enable first */
> > > +                        bgmac_dma_tx_enable(bgmac, ring);
> > > +
> > >                  bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGLO,
> > >                              lower_32_bits(ring->dma_base));
> > >                  bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGHI,
> > >                              upper_32_bits(ring->dma_base));
> > >
> > > +                if (ring->dma_unaligned)
> > > +                        /* Enable ring after initialising DMA table */
> > > +                        bgmac_dma_tx_enable(bgmac, ring);
> > > +
> > >                  ring->start = 0;
> > >                  ring->end = 0;  /* Points the slot that should *not* be read */
> > >          }
> > >
> > >          for (i = 0; i < BGMAC_MAX_RX_RINGS; i++) {
> > >                  int j;
> > > +                u32 index_base;
> > > +
> > >                  ring = &bgmac->rx_ring[i];
> > > +                index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
> > > +
> > > +                if (!ring->dma_unaligned)
> > > +                        /* When addressing is aligned, enable first */
> > > +                        bgmac_dma_rx_enable(bgmac, ring);
> > >
> > > -                /* We don't implement unaligned addressing, so enable first */
> > > -                bgmac_dma_rx_enable(bgmac, ring);
> > >                  bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGLO,
> > >                              lower_32_bits(ring->dma_base));
> > >                  bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGHI,
> > >                              upper_32_bits(ring->dma_base));
> > >
> > > +                if (ring->dma_unaligned)
> > > +                        /* Enable ring after initialising DMA table */
> > > +                        bgmac_dma_rx_enable(bgmac, ring);
> > > +
> > >                  for (j = 0, dma_desc = ring->cpu_base; j < ring->num_slots;
> > >                       j++, dma_desc++) {
> > >                          ctl0 = ctl1 = 0;
> > >
> > > @@ -526,7 +560,7 @@
> > >                  }
> > >
> > >                  bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_INDEX,
> > > -                            ring->num_slots * sizeof(struct bgmac_dma_desc));
> > > +                            ( index_base + ( ring->num_slots * sizeof(struct bgmac_dma_desc) ) ));
> > >
> > >                  ring->start = 0;
> > >                  ring->end = 0;
> > >
> > > Regards,
> > > Tijs
> >
> Thanks a lot.
> Regards,
> Tijs

Here's the new patch, drawn from git trunk:

---
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
index bf985c0..45305bc 100644
--- a/drivers/net/ethernet/broadcom/bgmac.c
+++ b/drivers/net/ethernet/broadcom/bgmac.c
@@ -108,6 +108,7 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
         struct net_device *net_dev = bgmac->net_dev;
         struct bgmac_dma_desc *dma_desc;
         struct bgmac_slot_info *slot;
+        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
         u32 ctl0, ctl1;
         int free_slots;
 
@@ -156,7 +157,7 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
         if (++ring->end >= BGMAC_TX_RING_SLOTS)
                 ring->end = 0;
         bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_INDEX,
-                    ring->end * sizeof(struct bgmac_dma_desc));
+                    ( index_base + ( ring->end * sizeof(struct bgmac_dma_desc) ) ));
 
         /* Always keep one slot free to allow detecting bugged calls. */
         if (--free_slots == 1)
@@ -174,14 +175,17 @@ err_stop_drop:
 static void bgmac_dma_tx_free(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
 {
         struct device *dma_dev = bgmac->core->dma_dev;
-        int empty_slot;
+        u16 empty_slot;
+        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
         bool freed = false;
 
         /* The last slot that hardware didn't consume yet */
-        empty_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS);
-        empty_slot &= BGMAC_DMA_TX_STATDPTR;
+        empty_slot = (u16) ( ( bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS) -
+                               index_base ) & BGMAC_DMA_TX_STATDPTR );
         empty_slot /= sizeof(struct bgmac_dma_desc);
 
+        BUG_ON(empty_slot >= ring->num_slots);
+
         while (ring->start != empty_slot) {
                 struct bgmac_slot_info *slot = &ring->slots[ring->start];
 
@@ -195,7 +199,7 @@ static void bgmac_dma_tx_free(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
                         dev_kfree_skb(slot->skb);
                         slot->skb = NULL;
                 } else {
-                        bgmac_err(bgmac, "Hardware reported transmission for empty TX ring slot %d! End of ring: %d\n",
+                        bgmac_err(bgmac, "Hardware reported transmission for empty TX ring slot %u! End of ring: %u\n",
                                   ring->start, ring->end);
                 }
 
@@ -270,11 +274,12 @@ static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
 static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
                              int weight)
 {
-        u32 end_slot;
+        u16 end_slot;
+        u32 index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
         int handled = 0;
 
-        end_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_STATUS);
-        end_slot &= BGMAC_DMA_RX_STATDPTR;
+        end_slot = (u16) ( ( bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_STATUS) -
+                             index_base ) & BGMAC_DMA_RX_STATDPTR );
         end_slot /= sizeof(struct bgmac_dma_desc);
 
         ring->end = end_slot;
@@ -298,7 +303,7 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 
                 /* Check for poison and drop or pass the packet */
                 if (len == 0xdead && flags == 0xbeef) {
-                        bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
+                        bgmac_err(bgmac, "Found poisoned packet at slot %u, DMA issue!\n",
                                   ring->start);
                 } else {
                         new_skb = netdev_alloc_skb_ip_align(bgmac->net_dev, len);
@@ -415,9 +420,13 @@ static int bgmac_dma_alloc(struct bgmac *bgmac)
                 ring = &bgmac->tx_ring[i];
                 ring->num_slots = BGMAC_TX_RING_SLOTS;
                 ring->mmio_base = ring_base[i];
-                if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_TX))
-                        bgmac_warn(bgmac, "TX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
-                                   ring->mmio_base);
+                ring->dma_unaligned = bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_TX);
+                if (ring->dma_unaligned)
+                        bgmac_dbg(bgmac, "TX on ring 0x%X supports unaligned DMA addressing\n",
+                                  ring->mmio_base);
+                else
+                        bgmac_dbg(bgmac, "TX on ring 0x%X supports aligned DMA addressing\n",
+                                  ring->mmio_base);
 
                 /* Alloc ring of descriptors */
                 size = ring->num_slots * sizeof(struct bgmac_dma_desc);
@@ -441,9 +450,13 @@ static int bgmac_dma_alloc(struct bgmac *bgmac)
                 ring = &bgmac->rx_ring[i];
                 ring->num_slots = BGMAC_RX_RING_SLOTS;
                 ring->mmio_base = ring_base[i];
-                if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_RX))
-                        bgmac_warn(bgmac, "RX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
-                                   ring->mmio_base);
+                ring->dma_unaligned = bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_RX);
+                if (ring->dma_unaligned)
+                        bgmac_dbg(bgmac, "RX on ring 0x%X supports unaligned DMA addressing\n",
+                                  ring->mmio_base);
+                else
+                        bgmac_dbg(bgmac, "RX on ring 0x%X supports aligned DMA addressing\n",
+                                  ring->mmio_base);
 
                 /* Alloc ring of descriptors */
                 size = ring->num_slots * sizeof(struct bgmac_dma_desc);
@@ -486,29 +499,43 @@ static void bgmac_dma_init(struct bgmac *bgmac)
         for (i = 0; i < BGMAC_MAX_TX_RINGS; i++) {
                 ring = &bgmac->tx_ring[i];
 
-                /* We don't implement unaligned addressing, so enable first */
-                bgmac_dma_tx_enable(bgmac, ring);
+                if (!ring->dma_unaligned)
+                        /* When addressing is aligned, enable first */
+                        bgmac_dma_tx_enable(bgmac, ring);
+
                 bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGLO,
                             lower_32_bits(ring->dma_base));
                 bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGHI,
                             upper_32_bits(ring->dma_base));
 
+                if (ring->dma_unaligned)
+                        /* Enable ring after initialising DMA table */
+                        bgmac_dma_tx_enable(bgmac, ring);
+
                 ring->start = 0;
                 ring->end = 0;  /* Points the slot that should *not* be read */
         }
 
         for (i = 0; i < BGMAC_MAX_RX_RINGS; i++) {
                 int j;
+                u32 index_base;
 
                 ring = &bgmac->rx_ring[i];
+                index_base = ( ring->dma_unaligned ? lower_32_bits(ring->dma_base) : 0 );
+
+                if (!ring->dma_unaligned)
+                        /* When addressing is aligned, enable first */
+                        bgmac_dma_rx_enable(bgmac, ring);
 
-                /* We don't implement unaligned addressing, so enable first */
-                bgmac_dma_rx_enable(bgmac, ring);
                 bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGLO,
                             lower_32_bits(ring->dma_base));
                 bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGHI,
                             upper_32_bits(ring->dma_base));
 
+                if (ring->dma_unaligned)
+                        /* Enable ring after initialising DMA table */
+                        bgmac_dma_rx_enable(bgmac, ring);
+
                 for (j = 0, dma_desc = ring->cpu_base; j < ring->num_slots;
                      j++, dma_desc++) {
                         ctl0 = ctl1 = 0;
@@ -528,7 +555,7 @@ static void bgmac_dma_init(struct bgmac *bgmac)
                 }
 
                 bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_INDEX,
-                            ring->num_slots * sizeof(struct bgmac_dma_desc));
+                            ( index_base + ( ring->num_slots * sizeof(struct bgmac_dma_desc) ) ));
 
                 ring->start = 0;
                 ring->end = 0;
diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
index 4ede614..4fd9793 100644
--- a/drivers/net/ethernet/broadcom/bgmac.h
+++ b/drivers/net/ethernet/broadcom/bgmac.h
@@ -384,6 +384,7 @@ struct bgmac_dma_ring {
         u16 mmio_base;
         struct bgmac_dma_desc *cpu_base;
         dma_addr_t dma_base;
+        bool dma_unaligned;
 
         struct bgmac_slot_info slots[BGMAC_RX_RING_SLOTS];
 };

_______________________________________________
openwrt-devel mailing list
openwrt-devel@lists.openwrt.org
https://lists.openwrt.org/mailman/listinfo/openwrt-devel