Just following up on this because I’m working on adding support to use this on
OpenWrt. I’m not sure what the issue is, but perhaps it’s a problem with the
MM netifd script on OpenWrt? I passed multiplex=$value in the connectargs and
then brought up the interface. Everything looks normal, except that traffic
seems to be transmitted over the interface but nothing is ever received. I
have tried removing the extra QMIMUX interfaces (del_mux 4, 3, 2) to bring it
close to the same configuration as when setting up with qmicli, but no joy.
Is anyone able to replicate this?
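For reference, the debug steps I’m running on the OpenWrt box look roughly
like this. This is only a sketch of my setup: the interface name (wwan0) and
the mux IDs (4, 3, 2) are specific to my test run and may differ elsewhere.

```shell
# Hedged sketch of my debugging on OpenWrt; wwan0, qmimux0 and the mux
# IDs below are assumptions from my own setup.

# After MM brings the connection up, netifd has configured qmimux0:
ip addr show dev qmimux0
ip route show default

# Drop the extra QMIMUX links so only qmimux0 remains, to get closer to
# the plain-qmicli configuration (qmi_wwan exposes add_mux/del_mux under
# /sys/class/net/<iface>/qmi/):
for mux_id in 4 3 2; do
    echo "$mux_id" > /sys/class/net/wwan0/qmi/del_mux
done

# Sanity-check raw-ip mode and watch the counters: TX grows, RX stays 0.
cat /sys/class/net/wwan0/qmi/raw_ip
ip -s link show dev wwan0
```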


Best,
Nick

> On 13 Aug 2022, at 17:48, Nick <mips...@icloud.com> wrote:
> 
> I have tried using ModemManager now with `multiplex=required` in the bearer 
> options and it connects with 4 QMUX interfaces, netifd assigns the IP address 
> to the qmimux0 interface, I have a default route via that interface, 
> everything looks good but... I can’t ping the internet.  It looks like wwan0 
> is now the qmux main interface, with a large MTU (31744).  Is there some 
> extra step required to use this?  What could I be missing?
> 
> Best,
> Nick
> 
>> On 13 Aug 2022, at 15:19, Nick <mips...@icloud.com> wrote:
>> 
>> I did some testing with this advice and have some results and a couple more 
>> questions.
>> 
>> Bjørn, I realise now you were talking about cdc-mbim.  I checked the values 
>> for tx_max and rx_max which are both 16384 by default on my device.  I am 
>> able to change rx_max to 31744, which seems to improve upload slightly, but 
>> I cannot change tx_max (Permission denied, and after changing file 
>> permissions I just get an I/O error).  Is that value supposed to be user 
>> accessible? Is this value tied to dwNtbOutMaxSize? Using cdc-mbim with these 
>> settings I consistently get 200Mbps, so my feeling is the bottleneck could 
>> be tied to these values, since I’m able to change their counterparts in the 
>> QMI driver.  Using QMI QMAP it gets much faster than before, about 450Mbps.
>> 
>> Another question, more ModemManager-related: is there a way to set up a 
>> connection using user-specified QMAP values like the ones Sebastian 
>> provided?  
>> > qmicli -p -d /dev/cdc-wdm0 --client-cid=1 \
>> >   --wda-set-data-format="link-layer-protocol=raw-ip,ul-protocol=qmap,dl-protocol=qmap,dl-max-datagrams=32,dl-datagram-max-size=32768,ep-type=hsusb,ep-iface-number=4" \
>> >   --client-no-release-cid
>> 
>> Best, 
>> Nick
>> 
>>> On 11 Aug 2022, at 17:47, Bjørn Mork <bj...@mork.no> wrote:
>>> 
>>> Nick <mips...@icloud.com> writes:
>>> 
>>>> Hey,
>>>> 
>>>> I am testing a Quectel RM500Q on OpenWrt master, and have noticed to
>>>> my surprise that the speed is much slower when using the qmi_wwan with
>>>> MM than it is when using qmi_wwan_q and quectel-CM (Quectel’s
>>>> proprietary driver and connection manager).
>>> 
>>> This is sort of expected since the qmi_wwan driver will use one USB
>>> transaction per IP packet whereas the qmi_wwan_q will buffer a number of
>>> packets per transaction.
>>> 
>>> There is some built-in support for QMAP (RMNET muxing, which implies
>>> buffering) in qmi_wwan.  But I recommend using the more recent rmnet
>>> driver for that, with qmi_wwan in pass-through mode.  This is supported
>>> by recent ModemManager/libqmi.  Ref
>>> https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/447
>>> 
>>>> Under good signal conditions the speed tops out at around 100Mbps on
>>>> qmi_wwan + MM (and is a little bit faster when in MBIM mode with MM),
>>>> but switching to qmi_wwan_q and quectel_CM it gets the expected
>>>> 700Mbps+ where I am. Is there an easy explanation for this? Any
>>>> suggestions as to what I can change to get speeds equivalent to the
>>>> proprietary stack?
>>> 
>>> I'm a little surprised that you don't get better numbers in MBIM mode.
>>> It should have the same advantages as qmi_wwan_q or qmi_wwan+rmnet. I
>>> must admit that I haven't done any serious testing of this theory myself
>>> though.  But "A little bit faster than 100Mbps" is unexpectedly slow.
>>> I'm pretty sure we can do much better than that in MBIM mode.
>>> 
>>> What kind of hardware is the host running?  Maybe we have some alignment
>>> issue punishing this hardware?  Or maybe the buffers we use are
>>> sub-optimal for this host+device combo?  You could try to adjust some
>>> of the writable settings in /sys/class/net/wwan0/cdc_ncm/ (replace wwan0
>>> with your interface name)
>>> 
>>> 
>>> 
>>> Bjørn
>> 
> 
