> I thought that an mmiowb() was called for here (to order the PIO 
> writes above more cheaply than doing the readq()). I posted a 
> patch like this some time ago:
> 
> http://marc.theaimsgroup.com/?l=linux-netdev&m=111508292028110&w=2

On an Altix machine I believe the readq was necessary to flush 
the PIO writes. How long did you run the tests? In long-duration 
tests I had seen that an occasional write (the TXDL control word 
and the address) would be missed and the xmit would get stuck.
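
For reference, the two alternatives being discussed look roughly
like this (only a sketch, using the names from the diff below, not
the exact driver code):

    /* current driver: flush the posted PIO writes with a PCI read */
    writeq(val64, &tx_fifo->List_Control);
    val64 = readq(&bar0->general_int_status); /* read-back forces the
                                                 writes out to the card */

    /* suggested alternative: mmiowb() only orders this CPU's MMIO writes
     * ahead of other CPUs' (no read round trip); it does not force the
     * writes to complete at the device */
    writeq(val64, &tx_fifo->List_Control);
    mmiowb();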


> 
> FWIW, I've done quite a few performance measurements with the patch 
> I posted earlier, and it's worked well. For 1500 byte mtus throughput 
> goes up by ~20%. Is even the mmiowb() unnecessary?
> 

Was this on a 2.4 kernel? I think the readq would not have a 
significant impact on 2.6 kernels because of TSO (with TSO on, the 
number of packets that actually enter the xmit routine is reduced 
approximately 40 times).

> What is the wmb() above for?
The wmb() is there to ensure the PIO writes are issued in order.
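
In other words, the intent of the sequence in the patch is roughly
the following (names taken from the diff below; a sketch only):

    writeq(val64, &tx_fifo->TxDL_Pointer);  /* give the NIC the TxDL address */
    wmb();                                  /* keep the doorbell write from
                                               passing the address write */
    writeq(val64, &tx_fifo->List_Control);  /* doorbell: start the transmit */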

Thanks
- Koushik

 

> -----Original Message-----
> From: Arthur Kepner [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, July 07, 2005 4:15 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
> netdev@vger.kernel.org; [EMAIL PROTECTED]; 
> [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: [PATCH 2.6.12.1 5/12] S2io: Performance improvements
> 
> 
> On Thu, 7 Jul 2005 [EMAIL PROTECTED] wrote:
> 
> > .......
> > 2. Removed unnecessary PIOs(read/write of tx_traffic_int and 
> >    rx_traffic_int) from interrupt handler and removed read of
> >    general_int_status register from xmit routine.           
>           
> > ......
> > @@ -2891,6 +2869,8 @@ int s2io_xmit(struct sk_buff *skb, struc
> >     val64 = mac_control->fifos[queue].list_info[put_off].list_phy_addr;
> >     writeq(val64, &tx_fifo->TxDL_Pointer);
> >  
> > +   wmb();
> > +
> >     val64 = (TX_FIFO_LAST_TXD_NUM(frg_cnt) | TX_FIFO_FIRST_LIST |
> >              TX_FIFO_LAST_LIST);
> >  
> > @@ -2900,9 +2880,6 @@ int s2io_xmit(struct sk_buff *skb, struc
> >  #endif
> >     writeq(val64, &tx_fifo->List_Control);
> >  
> > -   /* Perform a PCI read to flush previous writes */
> > -   val64 = readq(&bar0->general_int_status);
> > -
> >     put_off++;
>                                                                   
> I thought that an mmiowb() was called for here (to order the PIO 
> writes above more cheaply than doing the readq()). I posted a 
> patch like this some time ago:
> 
> http://marc.theaimsgroup.com/?l=linux-netdev&m=111508292028110&w=2
> 
> FWIW, I've done quite a few performance measurements with the patch 
> I posted earlier, and it's worked well. For 1500 byte mtus throughput 
> goes up by ~20%. Is even the mmiowb() unnecessary?
> 
> What is the wmb() above for?
> 
> --
> Arthur
> 
