On 7/19/2018 4:49 AM, Jakub Kicinski wrote:
> On Wed, 18 Jul 2018 18:01:01 -0700, Saeed Mahameed wrote:
>> +static const struct devlink_param mlx5_devlink_params[] = {
>> +       DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_ACTION,
>> +                            "congestion_action",
>> +                            DEVLINK_PARAM_TYPE_U8,
>> +                            BIT(DEVLINK_PARAM_CMODE_RUNTIME),
>> +                            mlx5_devlink_get_congestion_action,
>> +                            mlx5_devlink_set_congestion_action, NULL),
>> +       DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_MODE,
>> +                            "congestion_mode",
>> +                            DEVLINK_PARAM_TYPE_U8,
>> +                            BIT(DEVLINK_PARAM_CMODE_RUNTIME),
>> +                            mlx5_devlink_get_congestion_mode,
>> +                            mlx5_devlink_set_congestion_mode, NULL),
>> +};

> The devlink params haven't been upstream even for a full cycle and
> already you guys are starting to use them to configure standard
> features like queuing.

We developed the devlink params to support non-standard configuration only, and for non-standard configuration there are both generic and vendor-specific options. The queuing model is a standard. What is being configured here, however, is the outbound PCIe buffers on the receive path from the NIC port toward the host(s) in single-host and multi-host environments. (You can see how the driver processes this param, for the mark option, as part of the RX patch here: https://patchwork.ozlabs.org/patch/945998/)
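To make the mechanics concrete, here is a minimal sketch of how such runtime handlers plug into the devlink param API. The mlx5_congestion_cfg structure and the firmware hand-off are illustrative assumptions, not the actual mlx5 code from this series:

#include <net/devlink.h>

/* Illustrative driver state; the real mlx5 structures differ. */
struct mlx5_congestion_cfg {
	u8 action;	/* e.g. disabled / drop / mark */
};

static int
mlx5_devlink_get_congestion_action(struct devlink *devlink, u32 id,
				   struct devlink_param_gset_ctx *ctx)
{
	struct mlx5_congestion_cfg *cfg = devlink_priv(devlink);

	/* report the currently configured action back to devlink */
	ctx->val.vu8 = cfg->action;
	return 0;
}

static int
mlx5_devlink_set_congestion_action(struct devlink *devlink, u32 id,
				   struct devlink_param_gset_ctx *ctx)
{
	struct mlx5_congestion_cfg *cfg = devlink_priv(devlink);

	cfg->action = ctx->val.vu8;
	/* a real driver would push the new action to firmware here */
	return 0;
}

Once the table is registered via devlink_params_register(), userspace drives the knob with the standard devlink tool, e.g. (PCI address hypothetical):

devlink dev param set pci/0000:82:00.0 name congestion_action value 1 cmode runtime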


> I know your HW is not capable of doing full RED offload; it's a
> snowflake.

The algorithm applied here for the drop option is not the core of this feature.

> You tell us you're doing custom DCB configuration hacks on one side
> (the previous argument we had) and custom devlink parameter
> configuration hacks on PCIe.

> Perhaps the idea that we're trying to use the existing Linux APIs for
> HW configuration only applies to forwarding behaviour.

Hopefully I have explained above why this is not related.
