Hi Stuart,

Seeing as everyone seems too busy, my suggested workaround (letting some flows bypass queueing on interfaces that have a queue defined) has drawn no interest, and my current hack of queueing on the VLAN interfaces is a very incomplete and restrictive workaround:
Would you please be so kind as to provide me with a starting point in the source code, and the variable names to concentrate on, so I can start tracing from beginning to end with a view to changing the scale from bits to bytes?
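In case it helps frame the question, my (possibly wrong) assumption is that the 4294M ceiling is simply what you get when bandwidth is carried as an unsigned 32-bit count of bits per second:

    2^32 - 1 = 4,294,967,295 bit/s  ~=  4294 Mbit/s

so I am guessing the trail runs from wherever pfctl parses the 'bandwidth' keyword (sbin/pfctl/parse.y?), through the queue spec handed to the kernel, down to the HFSC code under sys/net/. Those file names are guesses on my part, hence asking for a proper starting point.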

Thanks :)
Andy



On 14 Sep 2023, at 19:34, Andrew Lemin <andrew.le...@gmail.com> wrote:




On Thu, Sep 14, 2023 at 7:23 PM Andrew Lemin <andrew.le...@gmail.com> wrote:


On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson <stu.li...@spacehopper.org> wrote:
On 2023-09-13, Andrew Lemin <andrew.le...@gmail.com> wrote:
> I have noticed another issue while trying to implement a 'prio'-only
> workaround (using only prio ordering for inter-VLAN traffic, and HFSC
> queuing for internet traffic);
> It is not possible to have internal inter-vlan traffic be solely priority
> ordered with 'set prio', as the existence of 'queue' definitions on the
> same internal vlan interfaces (required for internet flows), demands one
> leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
> the 'default' queue despite queuing not being wanted, and so
> unintentionally clamping all internal traffic to 4294M just because full
> queuing is needed for internet traffic.

If you enable queueing on an interface all traffic sent via that
interface goes via one queue or another.

Yes, that is indeed the very problem. Queueing is enabled on the inside interfaces, with bandwidth values set slightly below the ISP capacities (there are multiple ISP links as well), so that everything works well for all internal users.
However, this means that inter-vlan traffic from client networks to server networks is restricted to 4294Mbps for no reason. It would make a huge difference to be able to let local traffic flow without being queued/restricted.
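To make it concrete, the shape of the config is roughly this (a minimal sketch; interface names, numbers and table contents are illustrative, not my real ruleset):

    table <internal> { 10.0.0.0/8 }

    # inside iface sized just below its ISP uplink
    queue main on vlan10 bandwidth 950M
    queue inet parent main bandwidth 900M
    queue dfl parent main bandwidth 50M default

    # internet-bound flows get the shaped queue...
    match out on vlan10 to !<internal> set queue inet
    # ...but inter-vlan flows can only land in 'dfl', because once
    # queues exist on vlan10 there is no way to say "no queue"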
 

(also, AIUI the correct place for queues is on the physical interface
not the vlan, since that's where the bottleneck is... you can assign
traffic to a queue name as it comes in on the vlan but I believe the
actual queue definition should be on the physical iface).

Hehe yes I know. Thanks for sharing though.
I actually have very specific reasons for doing this (queues on the VLAN ifaces rather than the phy): there are multiple ISP connections for multiple VLANs, so each VLAN's queues are sized to the relevant ISP link, etc.

Also, separate from the multiple ISPs (I won't bore you with why, as it is not relevant here), the other reason for queueing on the VLANs is that it lets you get closer to the 10Gbps figure.
I.e. if you have queues on the 10Gbps PHY, you can only egress 4294Mbps across _all_ VLANs, but with queues per VLAN iface you can egress several times 4294Mbps in aggregate.
E.g. with vlans 10,11,12,13 on a single mcx0 trunk, 10->11 can do 4294Mbps and 12->13 can do 4294Mbps at the same time, giving over 8Gbps egress in total on the PHY. It is dirty, but like I said, I am desperate for workarounds... :(
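A sketch of the dirty version (again illustrative, not my real ruleset):

    # one queue tree per VLAN iface instead of one on mcx0, so the
    # 32-bit cap applies per VLAN rather than to the whole trunk
    queue q11 on vlan11 bandwidth 4294M default
    queue q13 on vlan13 bandwidth 4294M default
    # 10->11 egresses via q11 while 12->13 egresses via q13, so
    # mcx0 can carry ~8.5Gbps despite each tree's 4294M ceiling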
 
 

"required for internet flows" - depends on your network layout.. the
upstream feed doesn't have to go via the same interface as inter-vlan
traffic.

I'm not sure what you mean. All the internal networks/vlans are connected to local switches, and the switches have a trunk to the firewall, which hosts the default gateway for the VLANs and does the inter-vlan routing.
So all the clients go through the same VLANs/trunk/gateway for inter-vlan traffic as they do for internet traffic, and strict L3/L4 filtering is required on the inter-vlan traffic.
I am honestly looking for recognition that this is a correct, valid and common setup, and that there is therefore a genuine need to let flows bypass queueing on interfaces that have queues defined (which has potential applications for many use cases, not just mine, so it should be of interest to the developers?).
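Concretely, what I wish I could write is something along these lines ('set queue none' is hypothetical, not valid pf.conf today; it is purely to illustrate the ask):

    # keep full HFSC queueing for internet-bound flows
    match out on vlan10 to !<internal> set queue inet
    # let inter-vlan flows skip queueing entirely (hypothetical)
    match out on vlan10 to <internal> set queue none
    # with 'set prio' still available to order the unqueued traffic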

Do you know why there has to be a default queue? Yes, I know that traffic excluded from the queues would take bandwidth from the same interface the queueing is trying to manage, and could cause congestion. But with 10Gbps networking, which is beyond common now, that hardly matters while the queues themselves are stuck at 4294Mbps.

I am desperately trying to find a workaround that appeals. Surely the need is a no-brainer, and it is just a case of encouraging interest from a developer?

Thanks :)
