Stop using RTS for every data frame sent by iwm(4).
RTS adds unnecessary overhead when small data frames are sent.
The USE_RTS flag in iwm's LQ command enables RTS unconditionally, so only
set it while the AP is enforcing protection. The flag is kept up to date
as a side effect of iwm_setrates(), which is called whenever the Tx rate changes.
RTS is still used for long frames since the Tx command takes care of that.
(iwm firmware exposes 3 different flags which enable RTS... don't ask.)
ok?
Index: if_iwm.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/if_iwm.c,v
retrieving revision 1.143
diff -u -p -r1.143 if_iwm.c
--- if_iwm.c 5 Oct 2016 18:13:25 -0000 1.143
+++ if_iwm.c 6 Oct 2016 13:52:08 -0000
@@ -5286,7 +5286,9 @@ iwm_setrates(struct iwm_node *in)
memset(lq, 0, sizeof(*lq));
lq->sta_id = IWM_STATION_ID;
- lq->flags = IWM_LQ_FLAG_USE_RTS_MSK;
+
+ if (ic->ic_flags & IEEE80211_F_USEPROT)
+ lq->flags |= IWM_LQ_FLAG_USE_RTS_MSK;
sgi_ok = ((ni->ni_flags & IEEE80211_NODE_HT) &&
(ni->ni_htcaps & IEEE80211_HTCAP_SGI20));