The skb->queue_mapping field already has read access, via __sk_buff->queue_mapping.

This patch allows BPF tc qdisc clsact programs write access to the queue_mapping
via tc_cls_act_is_valid_access.  Also handle that the reserved value
NO_QUEUE_MAPPING is not allowed: writes of that value are skipped.

It is already possible to change this via the TC filter action skbedit, see
tc-skbedit(8).  Due to the lack of TC examples, let's show one:

  # tc qdisc  add  dev ixgbe1 clsact
  # tc filter add  dev ixgbe1 ingress matchall action skbedit queue_mapping 5
  # tc filter list dev ixgbe1 ingress

The most common mistake is that XPS (Transmit Packet Steering) takes
precedence over setting skb->queue_mapping. XPS is configured per DEVICE,
via a CPU hex mask written to /sys/class/net/DEVICE/queues/tx-*/xps_cpus.
To disable XPS, set the mask to 00.
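
For example, clearing the XPS mask on every TX queue of the ixgbe1 device
used above (illustrative command, not part of this patch):

  # for q in /sys/class/net/ixgbe1/queues/tx-*/xps_cpus ; do echo 00 > $q ; done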

The purpose of changing skb->queue_mapping is to influence the selection of
the net_device "txq" (struct netdev_queue), which in turn influences the
selection of the qdisc "root_lock" (via txq->qdisc->q.lock) and of
txq->_xmit_lock.  When using the MQ qdisc, each txq->qdisc points to a
different qdisc with its own locks, and each txq has its own HARD_TX_LOCK
(txq->_xmit_lock), allowing for CPU scalability.
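
As a rough illustration (not part of this patch; only a sketch using existing
kernel helpers, with the function name invented here and RCU/locking context
assumed to be set up by the caller), the per-txq structures and locks referred
to above relate like this:

  #include <linux/netdevice.h>
  #include <net/sch_generic.h>

  /* Given an already selected TX queue index, locate the per-queue
   * qdisc and the two locks mentioned above.
   */
  static void queue_mapping_locks_sketch(struct net_device *dev,
                                         u16 queue_index, int cpu)
  {
          struct netdev_queue *txq = netdev_get_tx_queue(dev, queue_index);
          struct Qdisc *q = rcu_dereference_bh(txq->qdisc); /* per-txq qdisc under MQ */
          spinlock_t *root_lock = qdisc_lock(q);            /* == &q->q.lock */

          __netif_tx_lock(txq, cpu);                        /* takes txq->_xmit_lock */
          __netif_tx_unlock(txq);
          (void)root_lock;
  }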

Due to the lack of TC examples, let's show how to attach a clsact BPF program:

 # tc qdisc  add  dev ixgbe2 clsact
 # tc filter add  dev ixgbe2 egress bpf da obj XXX_kern.o sec tc_qmap2cpu
 # tc filter list dev ixgbe2 egress
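
The XXX_kern.o object above is not shown in this patch.  A minimal sketch of
what a program in the tc_qmap2cpu section could look like (illustrative only;
the function name, includes and the CPU-to-queue policy are assumptions, not
the actual object referenced above):

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  SEC("tc_qmap2cpu")
  int tc_qmap2cpu_prog(struct __sk_buff *skb)
  {
          /* Example policy: let the current CPU pick the TX queue.
           * A queue_mapping of zero is treated as "not recorded" on the
           * transmit path, hence the +1 offset.  Writes of NO_QUEUE_MAPPING
           * (0xFFFF) are skipped by the kernel, see the BPF_JGE check below.
           */
          skb->queue_mapping = bpf_get_smp_processor_id() + 1;
          return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";

Compile with e.g.: clang -O2 -target bpf -c XXX_kern.c -o XXX_kern.o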

Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
---
 net/core/filter.c |   16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 353735575204..8b80cdf96595 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -6238,6 +6238,7 @@ static bool tc_cls_act_is_valid_access(int off, int size,
                case bpf_ctx_range(struct __sk_buff, tc_classid):
                case bpf_ctx_range_till(struct __sk_buff, cb[0], cb[4]):
                case bpf_ctx_range(struct __sk_buff, tstamp):
+               case bpf_ctx_range(struct __sk_buff, queue_mapping):
                        break;
                default:
                        return false;
@@ -6642,9 +6643,18 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
                break;
 
        case offsetof(struct __sk_buff, queue_mapping):
-               *insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
-                                     bpf_target_off(struct sk_buff, queue_mapping, 2,
-                                                    target_size));
+               if (type == BPF_WRITE) {
+                       *insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
+                       *insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
+                                             bpf_target_off(struct sk_buff,
+                                                            queue_mapping,
+                                                            2, target_size));
+               } else {
+                       *insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
+                                             bpf_target_off(struct sk_buff,
+                                                            queue_mapping,
+                                                            2, target_size));
+               }
                break;
 
        case offsetof(struct __sk_buff, vlan_present):
