Re: [Intel-wired-lan] [PATCH iwl-next v2] ice: Remove and readd netdev during devlink reload

2024-01-29 Thread Wojciech Drewek



On 27.01.2024 00:19, Brett Creeley wrote:
> 
> 
> On 1/25/2024 12:54 AM, Wojciech Drewek wrote:
>>
>>
>> Recent changes to the devlink reload (commit 9b2348e2d6c9
>> ("devlink: warn about existing entities during reload-reinit"))
>> force the drivers to destroy devlink ports during reinit.
>> Adjust the ice driver to this requirement: unregister the netdevice and
>> destroy the devlink port. ice_init_eth() was removed and all the common code
>> between probe and reload was moved to ice_load().
>>
>> During devlink reload we can't take devl_lock (it's already taken)
>> and in ice_probe() we have to lock it. Use devl_* variant of the API
>> which does not acquire and release devl_lock. Guard ice_load()
>> with devl_lock only in case of probe.
>>
>> Introduce ice_debugfs_fwlog_deinit() in order to release PF's
>> debugfs entries. Move ice_debugfs_exit() call to ice_module_exit().
>>
>> Suggested-by: Jiri Pirko 
>> Reviewed-by: Przemek Kitszel 
>> Signed-off-by: Wojciech Drewek 
>> ---
>> v2: empty init removed in ice_devlink_reinit_up
>> ---
>>   drivers/net/ethernet/intel/ice/ice.h |   3 +
>>   drivers/net/ethernet/intel/ice/ice_debugfs.c |  10 +
>>   drivers/net/ethernet/intel/ice/ice_devlink.c |  68 ++-
>>   drivers/net/ethernet/intel/ice/ice_fwlog.c   |   2 +
>>   drivers/net/ethernet/intel/ice/ice_main.c    | 189 ++-
>>   5 files changed, 139 insertions(+), 133 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/ice/ice.h 
>> b/drivers/net/ethernet/intel/ice/ice.h
>> index e841f6c4f1c4..39734e5b9d56 100644
>> --- a/drivers/net/ethernet/intel/ice/ice.h
>> +++ b/drivers/net/ethernet/intel/ice/ice.h
>> @@ -897,6 +897,7 @@ static inline bool ice_is_adq_active(struct ice_pf *pf)
>>   }
>>
>>   void ice_debugfs_fwlog_init(struct ice_pf *pf);
>> +void ice_debugfs_fwlog_deinit(struct ice_pf *pf);
>>   void ice_debugfs_init(void);
>>   void ice_debugfs_exit(void);
>>   void ice_pf_fwlog_update_module(struct ice_pf *pf, int log_level, int 
>> module);
>> @@ -984,6 +985,8 @@ void ice_service_task_schedule(struct ice_pf *pf);
>>   int ice_load(struct ice_pf *pf);
>>   void ice_unload(struct ice_pf *pf);
>>   void ice_adv_lnk_speed_maps_init(void);
>> +int ice_init_dev(struct ice_pf *pf);
>> +void ice_deinit_dev(struct ice_pf *pf);
>>
>>   /**
>>    * ice_set_rdma_cap - enable RDMA support
>> diff --git a/drivers/net/ethernet/intel/ice/ice_debugfs.c 
>> b/drivers/net/ethernet/intel/ice/ice_debugfs.c
>> index c2bfba6b9ead..8fdcdfb804b3 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_debugfs.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_debugfs.c
>> @@ -647,6 +647,16 @@ void ice_debugfs_fwlog_init(struct ice_pf *pf)
>>  kfree(fw_modules);
>>   }
>>
>> +/**
>> + * ice_debugfs_fwlog_deinit - cleanup PF's debugfs
>> + * @pf: pointer to the PF struct
>> + */
>> +void ice_debugfs_fwlog_deinit(struct ice_pf *pf)
>> +{
>> +   debugfs_remove_recursive(pf->ice_debugfs_pf);
>> +   pf->ice_debugfs_pf = NULL;
>> +}
> 
> This function seems misleading to me. The ice_pf structure has the following 
> debugfs dentry pointers:
> 
> struct dentry *ice_debugfs_pf;
> struct dentry *ice_debugfs_pf_fwlog;
> struct dentry *ice_debugfs_pf_fwlog_modules;
> 
> The function name is ice_debugfs_fwlog_deinit(), however it seems you are 
> removing debugfs entries recursively from ice_debugfs_pf.
> 
> Maybe the function should be called:
> 
> ice_debugfs_deinit()?

ice_debugfs_pf_deinit() is even better, I think, since it removes debugfs
entries per PF.

> 
> Also, I know your commit didn't introduce the naming scheme for the debugfs 
> members in struct ice_pf, but it's a bit strange having "ice" or "pf" in 
> their name. It might be worth a follow up to fix these names to something 
> like the following:
> 
> struct dentry *debugfs;
> struct dentry *debugfs_fwlog;
> struct dentry *debugfs_fwlog_modules;

We will do the follow-up on that.

> 
> Thanks,
> 
> Brett
>> +
>>   /**
>>    * ice_debugfs_init - create root directory for debugfs entries
>>    */
>> diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c 
>> b/drivers/net/ethernet/intel/ice/ice_devlink.c
>> index 97182bacafa3..a428e24439d0 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_devlink.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
>> @@ -444,6 +444,20 @@ ice_devlink_reload_empr_start(struct ice_pf *pf,
>>  return 0;
>>   }
>>
>> +/**
>> + * ice_devlink_reinit_down - unload given PF
>> + * @pf: pointer to the PF struct
>> + */
>> +static void ice_devlink_reinit_down(struct ice_pf *pf)
>> +{
>> +   /* No need to take devl_lock, it's already taken by devlink API */
>> +   ice_unload(pf);
>> +   rtnl_lock();
>> +   ice_vsi_decfg(ice_get_main_vsi(pf));
>> +   rtnl_unlock();
>> +   ice_deinit_dev(pf);
>> +}
>> +
>>   /**
>>    * ice_devlink_reload_down - prepare for reload

Re: [Intel-wired-lan] [iwl-next v1 4/8] ice: control default Tx rule in lag

2024-01-29 Thread Simon Horman
On Thu, Jan 25, 2024 at 01:53:10PM +0100, Michal Swiatkowski wrote:
> Tx rule in switchdev was changed to use PF instead of additional control
> plane VSI. Because of that during lag we should control it. Control
> means to add and remove the default Tx rule during lag active/inactive
> switching.
> 
> It can be done the same way as default Rx rule.

Hi Michal,

Can I confirm that LAG TX/RX works both before and after this patch?

> 
> Reviewed-by: Wojciech Drewek 
> Reviewed-by: Marcin Szycik 
> Signed-off-by: Michal Swiatkowski 
> ---
>  drivers/net/ethernet/intel/ice/ice_lag.c | 39 ++--
>  drivers/net/ethernet/intel/ice/ice_lag.h |  3 +-
>  2 files changed, 32 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c 
> b/drivers/net/ethernet/intel/ice/ice_lag.c

...

> @@ -266,9 +274,22 @@ ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
>  {
>   u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
>   ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;
> + int err;
> +
> + err = ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_rx_rule_id,
> +ICE_FLTR_RX, add);
> + if (err)
> + return err;
> +
> + err = ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_tx_rule_id,
> +ICE_FLTR_TX, add);
> + if (err) {
> + ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_rx_rule_id,
> +  ICE_FLTR_RX, !add);
> + return err;
> + }
>  
> - return ice_lag_cfg_fltr(lag, act, lag->pf_recipe,
> - &lag->pf_rule_id, add);
> + return 0;
>  }

nit: perhaps this could be more idiomatically written using a
 goto to unwind on error.
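
Something like the following, based on the hunk quoted above (only a sketch of
the suggested unwind pattern; the label name is of course up to you):

	static int ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
	{
		u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
			  ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;
		int err;

		err = ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_rx_rule_id,
				       ICE_FLTR_RX, add);
		if (err)
			return err;

		err = ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_tx_rule_id,
				       ICE_FLTR_TX, add);
		if (err)
			goto err_tx_fltr;

		return 0;

	err_tx_fltr:
		/* roll back the Rx rule programmed above */
		ice_lag_cfg_fltr(lag, act, lag->pf_recipe, &lag->pf_rx_rule_id,
				 ICE_FLTR_RX, !add);
		return err;
	}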

...


[Intel-wired-lan] [PATCH iwl-next v3] ice: Remove and readd netdev during devlink reload

2024-01-29 Thread Wojciech Drewek
Recent changes to the devlink reload (commit 9b2348e2d6c9
("devlink: warn about existing entities during reload-reinit"))
force the drivers to destroy devlink ports during reinit.
Adjust the ice driver to this requirement: unregister the netdevice and
destroy the devlink port. ice_init_eth() was removed and all the common code
between probe and reload was moved to ice_load().

During devlink reload we can't take devl_lock (it's already taken)
and in ice_probe() we have to lock it. Use devl_* variant of the API
which does not acquire and release devl_lock. Guard ice_load()
with devl_lock only in case of probe.

Introduce ice_debugfs_fwlog_deinit() in order to release PF's
debugfs entries. Move ice_debugfs_exit() call to ice_module_exit().
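
Roughly, the locking difference between the two callers looks like this (a
sketch only; using priv_to_devlink() to get the devlink instance is an
assumption based on how the driver does it elsewhere):

	/* probe path: devl_lock is not held yet, take it around ice_load() */
	devl_lock(priv_to_devlink(pf));
	err = ice_load(pf);
	devl_unlock(priv_to_devlink(pf));

	/* devlink reload path (ice_devlink_reinit_up()): devlink core already
	 * holds devl_lock, so ice_load() is called directly
	 */
	err = ice_load(pf);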

Suggested-by: Jiri Pirko 
Reviewed-by: Przemek Kitszel 
Reviewed-by: Vadim Fedorenko 
Reviewed-by: Simon Horman 
Signed-off-by: Wojciech Drewek 
---
v2: empty init removed in ice_devlink_reinit_up
v3: refactor locking pattern as Brett suggested
---
 drivers/net/ethernet/intel/ice/ice.h |   3 +
 drivers/net/ethernet/intel/ice/ice_debugfs.c |  10 +
 drivers/net/ethernet/intel/ice/ice_devlink.c |  68 ++-
 drivers/net/ethernet/intel/ice/ice_fwlog.c   |   2 +
 drivers/net/ethernet/intel/ice/ice_main.c| 189 ++-
 5 files changed, 139 insertions(+), 133 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h 
b/drivers/net/ethernet/intel/ice/ice.h
index e841f6c4f1c4..8c3cd78ad60c 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -897,6 +897,7 @@ static inline bool ice_is_adq_active(struct ice_pf *pf)
 }
 
 void ice_debugfs_fwlog_init(struct ice_pf *pf);
+void ice_debugfs_pf_deinit(struct ice_pf *pf);
 void ice_debugfs_init(void);
 void ice_debugfs_exit(void);
 void ice_pf_fwlog_update_module(struct ice_pf *pf, int log_level, int module);
@@ -984,6 +985,8 @@ void ice_service_task_schedule(struct ice_pf *pf);
 int ice_load(struct ice_pf *pf);
 void ice_unload(struct ice_pf *pf);
 void ice_adv_lnk_speed_maps_init(void);
+int ice_init_dev(struct ice_pf *pf);
+void ice_deinit_dev(struct ice_pf *pf);
 
 /**
  * ice_set_rdma_cap - enable RDMA support
diff --git a/drivers/net/ethernet/intel/ice/ice_debugfs.c 
b/drivers/net/ethernet/intel/ice/ice_debugfs.c
index c2bfba6b9ead..ba396b22bb7d 100644
--- a/drivers/net/ethernet/intel/ice/ice_debugfs.c
+++ b/drivers/net/ethernet/intel/ice/ice_debugfs.c
@@ -647,6 +647,16 @@ void ice_debugfs_fwlog_init(struct ice_pf *pf)
kfree(fw_modules);
 }
 
+/**
+ * ice_debugfs_pf_deinit - cleanup PF's debugfs
+ * @pf: pointer to the PF struct
+ */
+void ice_debugfs_pf_deinit(struct ice_pf *pf)
+{
+   debugfs_remove_recursive(pf->ice_debugfs_pf);
+   pf->ice_debugfs_pf = NULL;
+}
+
 /**
  * ice_debugfs_init - create root directory for debugfs entries
  */
diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c 
b/drivers/net/ethernet/intel/ice/ice_devlink.c
index 97182bacafa3..cc717175178b 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
@@ -444,6 +444,20 @@ ice_devlink_reload_empr_start(struct ice_pf *pf,
return 0;
 }
 
+/**
+ * ice_devlink_reinit_down - unload given PF
+ * @pf: pointer to the PF struct
+ */
+static void ice_devlink_reinit_down(struct ice_pf *pf)
+{
+   /* No need to take devl_lock, it's already taken by devlink API */
+   ice_unload(pf);
+   rtnl_lock();
+   ice_vsi_decfg(ice_get_main_vsi(pf));
+   rtnl_unlock();
+   ice_deinit_dev(pf);
+}
+
 /**
  * ice_devlink_reload_down - prepare for reload
  * @devlink: pointer to the devlink instance to reload
@@ -477,7 +491,7 @@ ice_devlink_reload_down(struct devlink *devlink, bool 
netns_change,
   "Remove all VFs before doing 
reinit\n");
return -EOPNOTSUPP;
}
-   ice_unload(pf);
+   ice_devlink_reinit_down(pf);
return 0;
case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
return ice_devlink_reload_empr_start(pf, extack);
@@ -1269,6 +1283,45 @@ static int ice_devlink_set_parent(struct devlink_rate 
*devlink_rate,
return status;
 }
 
+/**
+ * ice_devlink_reinit_up - do reinit of the given PF
+ * @pf: pointer to the PF struct
+ */
+static int ice_devlink_reinit_up(struct ice_pf *pf)
+{
+   struct ice_vsi *vsi = ice_get_main_vsi(pf);
+   struct ice_vsi_cfg_params params;
+   int err;
+
+   err = ice_init_dev(pf);
+   if (err)
+   return err;
+
+   params = ice_vsi_to_params(vsi);
+   params.flags = ICE_VSI_FLAG_INIT;
+
+   rtnl_lock();
+   err = ice_vsi_cfg(vsi, &params);
+   rtnl_unlock();
+   if (err)
+   goto err_vsi_cfg;
+
+   /* No need to take devl_lock, it's already taken by devlink API */
+   err = ice_load(pf);
+   if (err)
+   goto err_load;
+
+   return 0;
+
+err_load:
+   rtnl_lock(

Re: [Intel-wired-lan] [PATCH v2 0/7 iwl-next] idpf: refactor virtchnl messages

2024-01-29 Thread Alexander Lobakin
From: Alan Brady 
Date: Thu, 25 Jan 2024 21:47:40 -0800

> The motivation for this series has two primary goals. We want to enable
> support of multiple simultaneous messages and make the channel more
> robust. The way it works right now, the driver can only send and receive
> a single message at a time and if something goes really wrong, it can
> lead to data corruption and strange bugs.

[...]

There are a fistful of functions in this series and in IDPF's virtchnl code
in general that allocate a memory chunk via the kzalloc() family and then
free it at the end of the function, i.e. the lifetime of those buffers is
the lifetime of the function.
Since recently, we have auto-variables in the kernel, so that the pieces
I described could be converted to:

struct x *ptr __free(kfree) = NULL;

ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);

// some code

return 0; // kfree() is not needed anymore

err:
return err; // here as well

That would allow us to simplify the code and reduce its size.
I'd like you to convert such functions to use auto-variables.

Thanks,
Olek


Re: [Intel-wired-lan] [PATCH net-next RESENT v3] ethtool: ice: Support for RSS settings to GTP from ethtool

2024-01-29 Thread Marcin Szycik



On 27.01.2024 15:07, Takeru Hayasaka wrote:
> This is a patch that enables RSS functionality for GTP packets using ethtool.
> 
> A user can include her TEID and make RSS work for GTP-U over IPv4 by doing 
> the following:
> `ethtool -N ens3 rx-flow-hash gtpu4 sde`
> 
> In addition to gtpu(4|6), we now support 
> gtpc(4|6),gtpc(4|6)t,gtpu(4|6)e,gtpu(4|6)u, and gtpu(4|6)d.
> 
> gtpc(4|6): Used for GTP-C in IPv4 and IPv6, where the GTP header format does 
> not include a TEID.
> gtpc(4|6)t: Used for GTP-C in IPv4 and IPv6, with a GTP header format that 
> includes a TEID.
> gtpu(4|6): Used for GTP-U in both IPv4 and IPv6 scenarios.
> gtpu(4|6)e: Used for GTP-U with extended headers in both IPv4 and IPv6.
> gtpu(4|6)u: Used when the PSC (PDU session container) in the GTP-U extended 
> header includes Uplink, applicable to both IPv4 and IPv6.
> gtpu(4|6)d: Used when the PSC in the GTP-U extended header includes Downlink, 
> for both IPv4 and IPv6.

Do I understand correctly that all gtpu* include TEID? Maybe write it here.

> GTP generates a flow that includes an ID called TEID to identify the tunnel. 
> This tunnel is created for each UE (User Equipment).By performing RSS based 
> on this flow, it is possible to apply RSS for each communication unit from 
> the UE.Without this, RSS would only be effective within the range of IP 
> addresses. 
> For instance, the PGW can only perform RSS within the IP range of the 
> SGW.Problematic from a load distribution perspective, especially if there's a 
> bias in the terminals connected to a particular base station.This case can be 
> solved by using this patch.

It would be nice to see a link to the patch that added GTP and 'e' flag support
to ethtool itself ("ethtool: add support for rx-flow-hash gtp").

> Signed-off-by: Takeru Hayasaka 
> ---
> (I have resent the submission after making revisions based on Paul's advice.)
> Sorry for the delay.
> I've been swamped with other work and fell behind. 
> Since Harald has been supportive of the basic structure in the previous patch 
> review, I've kept it largely unchanged but added some comments and 
> documentation.
> I would appreciate it if you could review it again.

Please wrap all text to 80 columns and add missing spaces after ',' and '.'.

>  .../device_drivers/ethernet/intel/ice.rst | 23 --
>  drivers/net/ethernet/intel/ice/ice_ethtool.c  | 74 +++
>  drivers/net/ethernet/intel/ice/ice_flow.h | 22 ++
>  drivers/net/ethernet/intel/ice/ice_lib.c  | 37 ++
>  include/uapi/linux/ethtool.h  | 41 ++
>  5 files changed, 192 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/networking/device_drivers/ethernet/intel/ice.rst 
> b/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> index 5038e54586af..6bc1c6f10617 100644
> --- a/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> +++ b/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> @@ -368,16 +368,29 @@ more options for Receive Side Scaling (RSS) hash byte 
> configuration.
># ethtool -N  rx-flow-hash  
>  
>Where  is:
> -tcp4  signifying TCP over IPv4
> -udp4  signifying UDP over IPv4
> -tcp6  signifying TCP over IPv6
> -udp6  signifying UDP over IPv6
> +tcp4signifying TCP over IPv4
> +udp4signifying UDP over IPv4
> +gtpc4   signifying GTP-C over IPv4
> +gtpc4t  signifying GTP-C (include TEID) over IPv4
> +gtpu4   signifying GTP-U over IPV4
> +gtpu4e  signifying GTP-U and Extension Header over IPV4
> +gtpu4u  signifying GTP-U PSC Uplink over IPV4
> +gtpu4d  signifying GTP-U PSC Downlink over IPV4
> +tcp6signifying TCP over IPv6
> +udp6signifying UDP over IPv6
> +gtpc6   signifying GTP-C over IPv6
> +gtpc6t  signifying GTP-C (include TEID) over IPv6
> +gtpu6   signifying GTP-U over IPV6
> +gtpu6e  signifying GTP-U and Extension Header over IPV6
> +gtpu6u  signifying GTP-U PSC Uplink over IPV6
> +gtpu6d  signifying GTP-U PSC Downlink over IPV6
> +
>And  is one or more of:
>  s Hash on the IP source address of the Rx packet.
>  d Hash on the IP destination address of the Rx packet.
>  f Hash on bytes 0 and 1 of the Layer 4 header of the Rx packet.
>  n Hash on bytes 2 and 3 of the Layer 4 header of the Rx packet.
> -
> +e Hash on GTP Packet on TEID(4byte) of the Rx packet.

gtpc(4|6) doesn't include TEID, so what is its purpose?
s/TEID(4byte)/TEID (4bytes)/
Also, I think two newlines should remain here.

>  Accelerated Receive Flow Steering (aRFS)
>  
> diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c 
> b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> index a19b06f18e40..eb5f490c6127 100644
> --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
> +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
> @@ -2486,6 +2486,21 @@ static u32 ice_parse_hdrs(struct ethtool_r

Re: [Intel-wired-lan] [PATCH v2 0/7 iwl-next] idpf: refactor virtchnl messages

2024-01-29 Thread Brady, Alan
> -Original Message-
> From: Lobakin, Aleksander 
> Sent: Monday, January 29, 2024 5:24 AM
> To: Brady, Alan 
> Cc: intel-wired-...@lists.osuosl.org; net...@vger.kernel.org;
> willemdebruijn.ker...@gmail.com; Bagnucki, Igor
> ; Kitszel, Przemyslaw
> 
> Subject: Re: [Intel-wired-lan] [PATCH v2 0/7 iwl-next] idpf: refactor virtchnl
> messages
> 
> From: Alan Brady 
> Date: Thu, 25 Jan 2024 21:47:40 -0800
> 
> > The motivation for this series has two primary goals. We want to
> > enable support of multiple simultaneous messages and make the channel
> > more robust. The way it works right now, the driver can only send and
> > receive a single message at a time and if something goes really wrong,
> > it can lead to data corruption and strange bugs.
> 
> [...]
> 
> There are a fistful of functions in this series and IDPF's virtchnl code in 
> general
> that allocate a memory chunk via kzalloc() family and then free it at the end 
> of
> the function, i.e. the lifetime of those buffers are the lifetime of the 
> function.
> Since recently, we have auto-variables in the kernel, so that the pieces I
> described could be converted to:
> 
>   struct x *ptr __free(kfree) = NULL;
> 
>   ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
> 
>   // some code
> 
>   return 0; // kfree() is not needed anymore
> 
> err:
>   return err; // here as well
> 
> That would allow to simplify the code and reduce its size.
> I'd like you to convert such functions to use auto-variables.

Certainly, should be straightforward and make the code much better, sounds good 
to me. Just to clarify I'm only going to mess with the virtchnl functions I've 
otherwise altered in this patch series to maintain appropriate scope, yes?

-Alan

> 
> Thanks,
> Olek


Re: [Intel-wired-lan] [PATCH v2 0/7 iwl-next] idpf: refactor virtchnl messages

2024-01-29 Thread Alexander Lobakin
From: Brady, Alan 
Date: Mon, 29 Jan 2024 16:12:06 +0100

>> -Original Message-
>> From: Lobakin, Aleksander 
>> Sent: Monday, January 29, 2024 5:24 AM
>> To: Brady, Alan 
>> Cc: intel-wired-...@lists.osuosl.org; net...@vger.kernel.org;
>> willemdebruijn.ker...@gmail.com; Bagnucki, Igor
>> ; Kitszel, Przemyslaw
>> 
>> Subject: Re: [Intel-wired-lan] [PATCH v2 0/7 iwl-next] idpf: refactor 
>> virtchnl
>> messages
>>
>> From: Alan Brady 
>> Date: Thu, 25 Jan 2024 21:47:40 -0800
>>
>>> The motivation for this series has two primary goals. We want to
>>> enable support of multiple simultaneous messages and make the channel
>>> more robust. The way it works right now, the driver can only send and
>>> receive a single message at a time and if something goes really wrong,
>>> it can lead to data corruption and strange bugs.
>>
>> [...]
>>
>> There are a fistful of functions in this series and IDPF's virtchnl code in 
>> general
>> that allocate a memory chunk via kzalloc() family and then free it at the 
>> end of
>> the function, i.e. the lifetime of those buffers are the lifetime of the 
>> function.
>> Since recently, we have auto-variables in the kernel, so that the pieces I
>> described could be converted to:
>>
>>  struct x *ptr __free(kfree) = NULL;
>>
>>  ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
>>
>>  // some code
>>
>>  return 0; // kfree() is not needed anymore
>>
>> err:
>>  return err; // here as well
>>
>> That would allow to simplify the code and reduce its size.
>> I'd like you to convert such functions to use auto-variables.
> 
> Certainly, should be straightforward and make the code much better, sounds 
> good to me. Just to clarify I'm only going to mess with the virtchnl 
> functions I've otherwise altered in this patch series to maintain appropriate 
> scope, yes?

Yes, only virtchnl functions. New functions that you introduce 100%, the
rest only if you touch them.

> 
> -Alan
> 
>>
>> Thanks,
>> Olek

Thanks,
Olek


Re: [Intel-wired-lan] [PATCH iwl-next v3] ice: Remove and readd netdev during devlink reload

2024-01-29 Thread Brett Creeley




On 1/29/2024 4:32 AM, Wojciech Drewek wrote:



Recent changes to the devlink reload (commit 9b2348e2d6c9
("devlink: warn about existing entities during reload-reinit"))
force the drivers to destroy devlink ports during reinit.
Adjust the ice driver to this requirement: unregister the netdevice and
destroy the devlink port. ice_init_eth() was removed and all the common code
between probe and reload was moved to ice_load().

During devlink reload we can't take devl_lock (it's already taken)
and in ice_probe() we have to lock it. Use devl_* variant of the API
which does not acquire and release devl_lock. Guard ice_load()
with devl_lock only in case of probe.

Introduce ice_debugfs_fwlog_deinit() in order to release PF's
debugfs entries. Move ice_debugfs_exit() call to ice_module_exit().


Nit, but the function is no longer ice_debugfs_fwlog_deinit() as it 
changed from v2->v3.


Other than that, LGTM.

Reviewed-by: Brett Creeley 



Suggested-by: Jiri Pirko 
Reviewed-by: Przemek Kitszel 
Reviewed-by: Vadim Fedorenko 
Reviewed-by: Simon Horman 
Signed-off-by: Wojciech Drewek 
---
v2: empty init removed in ice_devlink_reinit_up
v3: refactor locking pattern as Brett suggested
---
  drivers/net/ethernet/intel/ice/ice.h |   3 +
  drivers/net/ethernet/intel/ice/ice_debugfs.c |  10 +
  drivers/net/ethernet/intel/ice/ice_devlink.c |  68 ++-
  drivers/net/ethernet/intel/ice/ice_fwlog.c   |   2 +
  drivers/net/ethernet/intel/ice/ice_main.c| 189 ++-
  5 files changed, 139 insertions(+), 133 deletions(-)


[...]


[Intel-wired-lan] [PATCH v3 0/7 iwl-next] idpf: refactor virtchnl messages

2024-01-29 Thread Alan Brady
The motivation for this series has two primary goals. We want to enable
support of multiple simultaneous messages and make the channel more
robust. The way it works right now, the driver can only send and receive
a single message at a time and if something goes really wrong, it can
lead to data corruption and strange bugs.

This works by conceptualizing a send and receive as a "virtchnl
transaction" (idpf_vc_xn) and introducing a "transaction manager"
(idpf_vc_xn_manager). The vcxn_mngr will init a ring of transactions
from which the driver will pop from a bitmap of free transactions to
track in-flight messages. Instead of needing to handle a complicated
send/recv for every message, the driver now just needs to fill out an
xn_params struct and hand it over to idpf_vc_xn_exec, which will take
care of all the messy bits. Once a message is sent and receives a reply,
we leverage the completion API to signal the received buffer is ready to
be used (assuming success, or an error code otherwise).

At a low level, this implements the "sw cookie" field of the virtchnl
message descriptor to enable this. We have 16 bits into which we can put
whatever we want, and the recipient is required to apply the same cookie to the
reply for that message.  We use the first 8 bits as an index into the
array of transactions to enable fast lookups and we use the second 8
bits as a salt to make sure each cookie is unique for that message. As
transactions are received in arbitrary order, it's possible to reuse a
transaction index and the salt guards against index conflicts to make
certain the lookup is correct. As a primitive example, say index 1 is
used with salt 1. The message times out without receiving a reply so
index 1 is renewed to be ready for a new transaction, we report the
timeout, and send the message again. Since index 1 is free to be used
again now, index 1 is again sent but now salt is 2. This time we do get
a reply, however it could be that the reply is _actually_ for the
previous send index 1 with salt 1.  Without the salt we would have no
way of knowing for sure if it's the correct reply, but with it we will
know for certain.
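
Concretely, the cookie split can be sketched with the masks added in patch 1
(the helper name below is made up purely for illustration):

	/* illustrative only: pack index + salt into the 16-bit sw cookie */
	static u16 idpf_vc_xn_cookie(u8 idx, u8 salt)
	{
		return FIELD_PREP(IDPF_VC_XN_IDX_M, idx) |
		       FIELD_PREP(IDPF_VC_XN_SALT_M, salt);
	}

So in the example above, index 1 with salt 1 gives cookie 0x0101, and the
resend with salt 2 gives 0x0201.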

Through this conversion we also get several other benefits. We can now
more appropriately handle asynchronously sent messages by providing
space for a callback to be defined. This notably allows us to handle MAC
filter failures better; previously we could potentially have stale,
failed filters in our list, which shouldn't really have a major impact
but is obviously not correct. I also managed to remove significantly
more lines than I added, which is a win in my book.

Additionally, this converts some variables to use auto-variables where
appropriate. This makes the alloc paths much cleaner and less prone to
memory leaks.


Alan Brady (7):
  idpf: implement virtchnl transaction manager
  idpf: refactor vport virtchnl messages
  idpf: refactor queue related virtchnl messages
  idpf: refactor remaining virtchnl messages
  idpf: add async_handler for MAC filter messages
  idpf: refactor idpf_recv_mb_msg
  idpf: cleanup virtchnl cruft

 drivers/net/ethernet/intel/idpf/idpf.h|  192 +-
 .../ethernet/intel/idpf/idpf_controlq_api.h   |5 +
 drivers/net/ethernet/intel/idpf/idpf_lib.c|   29 +-
 drivers/net/ethernet/intel/idpf/idpf_main.c   |3 +-
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 2191 -
 6 files changed, 1086 insertions(+), 1336 deletions(-)

-- 
v1 -> v2:
- don't take spin_lock in idpf_vc_xn_init, it's not needed
- fix set but unused error on payload_size var in idpf_recv_mb_msg
- prefer bitmap_fill and bitmap_zero if not setting an explicit
  range per documention
- remove a couple unnecessary casts in idpf_send_get_stats_msg and
  idpf_send_get_rx_ptype_msg
- split patch 4/6 such that the added functionality for MAC filters
  is separate
v2 -> v3:
- fix 'mac' -> 'MAC' in async handler error messages
- fix size_t format specifier in async handler error message
- change some variables to use auto-variables instead
2.40.1



[Intel-wired-lan] [PATCH v3 1/7 iwl-next] idpf: implement virtchnl transaction manager

2024-01-29 Thread Alan Brady
This starts refactoring how virtchnl messages are handled by adding a
transaction manager (idpf_vc_xn_manager).

There are two primary motivations here which are to enable handling of
multiple messages at once and to make it more robust in general. As it
is right now, the driver may only have one pending message at a time and
there's no guarantee that the response we receive was actually intended
for the message we sent prior.

This works by utilizing a "cookie" field of the message descriptor. It
is arbitrary what data we put in the cookie and the response is required
to have the same cookie the original message was sent with. A
"transaction" abstraction then uses the completion API to pair each
response with the message it belongs to.

The cookie works such that the first half is the index to the
transaction in our array, and the second half is a "salt" that gets
incremented every message. This enables quick lookups into the array and
also ensures we have the correct message. The salt is necessary because,
after a message times out and we deem its response lost, we could
theoretically reuse the same index; a different salt ensures that when a
response does eventually arrive it is not mistaken for a reply to the old,
timed-out message.
Since the number of transactions allocated is U8_MAX and the salt is 8
bits, we can never have a conflict because we can't roll over the salt
without using more transactions than we have available.
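
On the receive side this boils down to something like the following sketch
(the ring/idx/salt member names here are illustrative, not necessarily the
exact ones used in the patch):

	/* look up the transaction for a received sw cookie; a stale reply for
	 * an earlier use of this index shows up as a salt mismatch
	 */
	static struct idpf_vc_xn *
	idpf_vc_xn_lookup(struct idpf_vc_xn_manager *vcxn_mngr, u16 cookie)
	{
		u16 xn_idx = FIELD_GET(IDPF_VC_XN_IDX_M, cookie);
		u16 salt = FIELD_GET(IDPF_VC_XN_SALT_M, cookie);
		struct idpf_vc_xn *xn = &vcxn_mngr->ring[xn_idx];

		return xn->salt == salt ? xn : NULL;
	}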

This starts by only converting the VIRTCHNL2_OP_VERSION message to use
this new transaction API. Follow up patches will convert all virtchnl
messages to use the API.

While we're here we can make idpf_send_mb_msg a little better by using
auto-variables.

Reviewed-by: Przemek Kitszel 
Reviewed-by: Igor Bagnucki 
Co-developed-by: Joshua Hay 
Signed-off-by: Joshua Hay 
Signed-off-by: Alan Brady 
---
 drivers/net/ethernet/intel/idpf/idpf.h| 101 +++-
 .../ethernet/intel/idpf/idpf_controlq_api.h   |   5 +
 drivers/net/ethernet/intel/idpf/idpf_lib.c|   2 +
 drivers/net/ethernet/intel/idpf/idpf_main.c   |   1 +
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |   2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 538 +++---
 6 files changed, 554 insertions(+), 95 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h 
b/drivers/net/ethernet/intel/idpf/idpf.h
index 0acc125decb3..92103c3cf5a0 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -39,6 +39,11 @@ struct idpf_vport_max_q;
((IDPF_CTLQ_MAX_BUF_LEN - (struct_sz)) / (chunk_sz))
 #define IDPF_WAIT_FOR_EVENT_TIMEO_MIN  2000
 #define IDPF_WAIT_FOR_EVENT_TIMEO  6
+#define IDPF_VC_XN_MIN_TIMEOUT_MSEC2000
+#define IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC(60 * 1000)
+#define IDPF_VC_XN_IDX_M   GENMASK(7, 0)
+#define IDPF_VC_XN_SALT_M  GENMASK(15, 8)
+#define IDPF_VC_XN_RING_LENU8_MAX
 
 #define IDPF_MAX_WAIT  500
 
@@ -66,14 +71,12 @@ struct idpf_mac_filter {
 
 /**
  * enum idpf_state - State machine to handle bring up
- * @__IDPF_STARTUP: Start the state machine
  * @__IDPF_VER_CHECK: Negotiate virtchnl version
  * @__IDPF_GET_CAPS: Negotiate capabilities
  * @__IDPF_INIT_SW: Init based on given capabilities
  * @__IDPF_STATE_LAST: Must be last, used to determine size
  */
 enum idpf_state {
-   __IDPF_STARTUP,
__IDPF_VER_CHECK,
__IDPF_GET_CAPS,
__IDPF_INIT_SW,
@@ -314,6 +317,96 @@ struct idpf_port_stats {
struct virtchnl2_vport_stats vport_stats;
 };
 
+/**
+ * enum idpf_vc_xn_state - Virtchnl transaction status
+ * @IDPF_VC_XN_IDLE: not expecting a reply, ready to be used
+ * @IDPF_VC_XN_WAITING: expecting a reply, not yet received
+ * @IDPF_VC_XN_COMPLETED_SUCCESS: a reply was expected and received,
+ *   buffer updated
+ * @IDPF_VC_XN_COMPLETED_FAILED: a reply was expected and received, but there
+ *  was an error, buffer not updated
+ * @IDPF_VC_XN_SHUTDOWN: transaction object cannot be used, VC torn down
+ * @IDPF_VC_XN_ASYNC: transaction sent asynchronously and doesn't have the
+ *   return context; a callback may be provided to handle
+ *   return
+ */
+enum idpf_vc_xn_state {
+   IDPF_VC_XN_IDLE = 1,
+   IDPF_VC_XN_WAITING,
+   IDPF_VC_XN_COMPLETED_SUCCESS,
+   IDPF_VC_XN_COMPLETED_FAILED,
+   IDPF_VC_XN_SHUTDOWN,
+   IDPF_VC_XN_ASYNC,
+};
+
+struct idpf_vc_xn;
+/* Callback for asynchronous messages */
+typedef int (*async_vc_cb) (struct idpf_adapter *, struct idpf_vc_xn *,
+   const struct idpf_ctlq_msg *);
+
+/**
+ * struct idpf_vc_xn - Data structure representing virtchnl transactions
+ * @completed: virtchnl event loop uses that to signal when a reply is
+ *available, uses kernel completion API
+ * @state: virtchnl event loop stores the da

[Intel-wired-lan] [PATCH v3 2/7 iwl-next] idpf: refactor vport virtchnl messages

2024-01-29 Thread Alan Brady
This reworks the way vport-related virtchnl messages work to take
advantage of the added transaction API. It is fairly mechanical as, to
use the transaction API, the function just needs to fill out an
appropriate idpf_vc_xn_params struct to pass to idpf_vc_xn_exec which
will take care of the actual send and recv.
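
The resulting per-message pattern is roughly the following (sketched from the
GET_CAPS-style conversions in this series; the buffer names and sizes here are
placeholders):

	struct idpf_vc_xn_params xn_params = {};
	ssize_t reply_sz;

	xn_params.vc_op = VIRTCHNL2_OP_CREATE_VPORT;
	xn_params.send_buf.iov_base = vport_msg;
	xn_params.send_buf.iov_len = buf_size;
	xn_params.recv_buf.iov_base = vport_msg;
	xn_params.recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN;
	xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;

	reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
	if (reply_sz < 0)
		return reply_sz;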

Reviewed-by: Przemek Kitszel 
Reviewed-by: Igor Bagnucki 
Co-developed-by: Joshua Hay 
Signed-off-by: Joshua Hay 
Signed-off-by: Alan Brady 
---
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 185 +++---
 1 file changed, 69 insertions(+), 116 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c 
b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 06c1edd4e5ea..e7aa381064ac 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -745,7 +745,6 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
 
while (1) {
struct idpf_vport_config *vport_config;
-   int payload_size = 0;
 
/* Try to get one message */
num_q_msg = 1;
@@ -782,47 +781,17 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
if (err)
goto post_buffs;
 
-   if (ctlq_msg.data_len)
-   payload_size = ctlq_msg.ctx.indirect.payload->size;
-
/* All conditions are met. Either a message requested is
 * received or we received a message to be processed
 */
switch (ctlq_msg.cookie.mbx.chnl_opcode) {
case VIRTCHNL2_OP_VERSION:
-   err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg);
-   break;
case VIRTCHNL2_OP_GET_CAPS:
-   if (ctlq_msg.cookie.mbx.chnl_retval) {
-   dev_err(&adapter->pdev->dev, "Failure 
initializing, vc op: %u retval: %u\n",
-   ctlq_msg.cookie.mbx.chnl_opcode,
-   ctlq_msg.cookie.mbx.chnl_retval);
-   err = -EBADMSG;
-   } else if (msg) {
-   memcpy(msg, ctlq_msg.ctx.indirect.payload->va,
-  min_t(int, payload_size, msg_size));
-   }
-   work_done = true;
-   break;
case VIRTCHNL2_OP_CREATE_VPORT:
-   idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
-  IDPF_VC_CREATE_VPORT,
-  IDPF_VC_CREATE_VPORT_ERR);
-   break;
case VIRTCHNL2_OP_ENABLE_VPORT:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_ENA_VPORT,
-  IDPF_VC_ENA_VPORT_ERR);
-   break;
case VIRTCHNL2_OP_DISABLE_VPORT:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_DIS_VPORT,
-  IDPF_VC_DIS_VPORT_ERR);
-   break;
case VIRTCHNL2_OP_DESTROY_VPORT:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_DESTROY_VPORT,
-  IDPF_VC_DESTROY_VPORT_ERR);
+   err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg);
break;
case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
@@ -1209,7 +1178,9 @@ static int idpf_send_ver_msg(struct idpf_adapter *adapter)
  */
 static int idpf_send_get_caps_msg(struct idpf_adapter *adapter)
 {
-   struct virtchnl2_get_capabilities caps = { };
+   struct virtchnl2_get_capabilities caps = {};
+   struct idpf_vc_xn_params xn_params = {};
+   ssize_t reply_sz;
 
caps.csum_caps =
cpu_to_le32(VIRTCHNL2_CAP_TX_CSUM_L3_IPV4   |
@@ -1266,21 +1237,20 @@ static int idpf_send_get_caps_msg(struct idpf_adapter 
*adapter)
VIRTCHNL2_CAP_PROMISC   |
VIRTCHNL2_CAP_LOOPBACK);
 
-   return idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_CAPS, sizeof(caps),
-   (u8 *)&caps, 0);
-}
+   xn_params.vc_op = VIRTCHNL2_OP_GET_CAPS;
+   xn_params.send_buf.iov_base = &caps;
+   xn_params.send_buf.iov_len = sizeof(caps);
+   xn_params.recv_buf.iov_base = &adapter->caps;
+   xn_params.recv_buf.iov_len = sizeof(adapter->caps);
+   xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
 
-/**
- * idpf_recv_get_caps_msg - Receive virtchnl get capabilities message
- * @adapter: Driver specific private struct

[Intel-wired-lan] [PATCH v3 3/7 iwl-next] idpf: refactor queue related virtchnl messages

2024-01-29 Thread Alan Brady
This reworks queue-specific virtchnl messages to use the added
transaction API. It is fairly mechanical and generally makes the
functions using it simpler. Functions using the transaction API no
longer need to take the vc_buf_lock since they no longer use it.
After filling out an idpf_vc_xn_params struct, idpf_vc_xn_exec takes
care of the send and recv handling.

This also converts those functions where appropriate to use
auto-variables instead of manually calling kfree. This greatly
simplifies the memory alloc paths and makes them less prone to memory
leaks.
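
For example, the allocation pattern used below reduces to something like this
sketch (the fill/send step is elided):

	struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL;

	ctq = kzalloc(buf_sz, GFP_KERNEL);
	if (!ctq)
		return -ENOMEM;

	/* ... fill ctq and send it via idpf_vc_xn_exec() ... */

	return 0;	/* ctq is kfree()d automatically when it goes out of scope */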

Reviewed-by: Przemek Kitszel 
Reviewed-by: Igor Bagnucki 
Signed-off-by: Alan Brady 
---
 drivers/net/ethernet/intel/idpf/idpf.h|   2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 393 ++
 2 files changed, 136 insertions(+), 259 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h 
b/drivers/net/ethernet/intel/idpf/idpf.h
index 92103c3cf5a0..393f6e46012f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -648,7 +648,7 @@ struct idpf_vector_lifo {
 struct idpf_vport_config {
struct idpf_vport_user_config_data user_config;
struct idpf_vport_max_q max_q;
-   void *req_qs_chunks;
+   struct virtchnl2_add_queues *req_qs_chunks;
spinlock_t mac_filter_list_lock;
DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS);
 };
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c 
b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index e7aa381064ac..3c6116f24286 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -791,47 +791,15 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
case VIRTCHNL2_OP_ENABLE_VPORT:
case VIRTCHNL2_OP_DISABLE_VPORT:
case VIRTCHNL2_OP_DESTROY_VPORT:
-   err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg);
-   break;
case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_CONFIG_TXQ,
-  IDPF_VC_CONFIG_TXQ_ERR);
-   break;
case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_CONFIG_RXQ,
-  IDPF_VC_CONFIG_RXQ_ERR);
-   break;
case VIRTCHNL2_OP_ENABLE_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_ENA_QUEUES,
-  IDPF_VC_ENA_QUEUES_ERR);
-   break;
case VIRTCHNL2_OP_DISABLE_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_DIS_QUEUES,
-  IDPF_VC_DIS_QUEUES_ERR);
-   break;
case VIRTCHNL2_OP_ADD_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_ADD_QUEUES,
-  IDPF_VC_ADD_QUEUES_ERR);
-   break;
case VIRTCHNL2_OP_DEL_QUEUES:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_DEL_QUEUES,
-  IDPF_VC_DEL_QUEUES_ERR);
-   break;
case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_MAP_IRQ,
-  IDPF_VC_MAP_IRQ_ERR);
-   break;
case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-   idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
-  IDPF_VC_UNMAP_IRQ,
-  IDPF_VC_UNMAP_IRQ_ERR);
+   err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg);
break;
case VIRTCHNL2_OP_GET_STATS:
idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
@@ -1766,11 +1734,13 @@ int idpf_send_disable_vport_msg(struct idpf_vport 
*vport)
  */
 static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
 {
-   struct virtchnl2_config_tx_queues *ctq;
+   struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL;
+   struct virtchnl2_txq_info *qi __free(kfree) = NULL;
+   struct idpf_vc_xn_params xn_params = {};
u32 config_sz, chunk_sz, buf_sz;
int totqs, num_msgs, num_chunks;
-   struct virtchnl2_txq_info *qi;
-   int err = 0, i, k = 0;
+

[Intel-wired-lan] [PATCH v3 4/7 iwl-next] idpf: refactor remaining virtchnl messages

2024-01-29 Thread Alan Brady
This takes care of RSS/SRIOV/MAC and other misc virtchnl messages. This
again is mostly mechanical.

In the absence of an async_handler for MAC filters, this will simply
generically report any errors from idpf_vc_xn_forward_async. This
maintains the existing behavior. Follow up patch will add an async
handler for MAC filters to remove bad filters from our list.

While we're here we can also make the code much nicer by converting some
variables to auto-variables where appropriate. This makes it cleaner and
less prone to memory leaking.

There's still a bit more cleanup we can do here to remove stuff that's
not being used anymore now; follow-up patches will take care of loose
ends.

Signed-off-by: Alan Brady 
---
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 894 ++
 1 file changed, 298 insertions(+), 596 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c 
b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 3c6116f24286..36495e0b5cf5 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -3,6 +3,64 @@
 
 #include "idpf.h"
 
+/**
+ * idpf_vid_to_vport - Translate vport id to vport pointer
+ * @adapter: private data struct
+ * @v_id: vport id to translate
+ *
+ * Returns vport matching v_id, NULL if not found.
+ */
+static
+struct idpf_vport *idpf_vid_to_vport(struct idpf_adapter *adapter, u32 v_id)
+{
+   u16 num_max_vports = idpf_get_max_vports(adapter);
+   int i;
+
+   for (i = 0; i < num_max_vports; i++)
+   if (adapter->vport_ids[i] == v_id)
+   return adapter->vports[i];
+
+   return NULL;
+}
+
+/**
+ * idpf_handle_event_link - Handle link event message
+ * @adapter: private data struct
+ * @v2e: virtchnl event message
+ */
+static void idpf_handle_event_link(struct idpf_adapter *adapter,
+  const struct virtchnl2_event *v2e)
+{
+   struct idpf_netdev_priv *np;
+   struct idpf_vport *vport;
+
+   vport = idpf_vid_to_vport(adapter, le32_to_cpu(v2e->vport_id));
+   if (!vport) {
+   dev_err_ratelimited(&adapter->pdev->dev, "Failed to find 
vport_id %d for link event\n",
+   v2e->vport_id);
+   return;
+   }
+   np = netdev_priv(vport->netdev);
+
+   vport->link_speed_mbps = le32_to_cpu(v2e->link_speed);
+
+   if (vport->link_up == v2e->link_status)
+   return;
+
+   vport->link_up = v2e->link_status;
+
+   if (np->state != __IDPF_VPORT_UP)
+   return;
+
+   if (vport->link_up) {
+   netif_tx_start_all_queues(vport->netdev);
+   netif_carrier_on(vport->netdev);
+   } else {
+   netif_tx_stop_all_queues(vport->netdev);
+   netif_carrier_off(vport->netdev);
+   }
+}
+
 /**
  * idpf_recv_event_msg - Receive virtchnl event message
  * @vport: virtual port structure
@@ -13,33 +71,24 @@
 static void idpf_recv_event_msg(struct idpf_vport *vport,
struct idpf_ctlq_msg *ctlq_msg)
 {
-   struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+   int payload_size = ctlq_msg->ctx.indirect.payload->size;
struct virtchnl2_event *v2e;
-   bool link_status;
u32 event;
 
+   if (payload_size < sizeof(*v2e)) {
+   dev_err_ratelimited(&vport->adapter->pdev->dev, "Failed to 
receive valid payload for event msg (op %d len %d)\n",
+   ctlq_msg->cookie.mbx.chnl_opcode,
+   payload_size);
+   return;
+   }
+
v2e = (struct virtchnl2_event *)ctlq_msg->ctx.indirect.payload->va;
event = le32_to_cpu(v2e->event);
 
switch (event) {
case VIRTCHNL2_EVENT_LINK_CHANGE:
-   vport->link_speed_mbps = le32_to_cpu(v2e->link_speed);
-   link_status = v2e->link_status;
-
-   if (vport->link_up == link_status)
-   break;
-
-   vport->link_up = link_status;
-   if (np->state == __IDPF_VPORT_UP) {
-   if (vport->link_up) {
-   netif_carrier_on(vport->netdev);
-   netif_tx_start_all_queues(vport->netdev);
-   } else {
-   netif_tx_stop_all_queues(vport->netdev);
-   netif_carrier_off(vport->netdev);
-   }
-   }
-   break;
+   idpf_handle_event_link(vport->adapter, v2e);
+   return;
default:
dev_err(&vport->adapter->pdev->dev,
"Unknown event %d from PF\n", event);
@@ -273,89 +322,6 @@ static int idpf_find_vport(struct idpf_adapter *adapter,
return err;
 }
 
-/**
- * idpf_copy_data_to_vc_buf - Copy the virtchnl response data into the buffer.
- * @adapter: dri

[Intel-wired-lan] [PATCH v3 5/7 iwl-next] idpf: add async_handler for MAC filter messages

2024-01-29 Thread Alan Brady
There are situations where the driver needs to add a MAC filter but
we're explicitly not allowed to sleep, so we cannot wait for a virtchnl
message to complete.

This adds an async_handler for asynchronously sent messages for MAC
filters so that we can better handle if there's an error of some kind.
On success we don't need to do anything else, but if we failed to
program the new filter we really should remove it from our list of MAC
filters. If we don't remove bad filters, what I expect to happen is
after a reset of some kind we try to program the MAC filter again and it
fails again. This is clearly wrong and I would expect it to be confusing
for the user.

It could also be that the failure is for a delete MAC filter message, but
those filters get deleted regardless; not much we can do about a delete
failure.

Signed-off-by: Alan Brady 
---
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 70 +++
 1 file changed, 70 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c 
b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 36495e0b5cf5..b4535972d03b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -3535,6 +3535,75 @@ u32 idpf_get_vport_id(struct idpf_vport *vport)
return le32_to_cpu(vport_msg->vport_id);
 }
 
+/**
+ * idpf_mac_filter_async_handler - Async callback for mac filters
+ * @adapter: private data struct
+ * @xn: transaction for message
+ * @ctlq_msg: received message
+ *
+ * In some scenarios driver can't sleep and wait for a reply (e.g.: stack is
+ * holding rtnl_lock) when adding a new mac filter. It puts us in a difficult
+ * situation to deal with errors returned on the reply. The best we can
+ * ultimately do is remove it from our list of mac filters and report the
+ * error.
+ */
+static int idpf_mac_filter_async_handler(struct idpf_adapter *adapter,
+struct idpf_vc_xn *xn,
+const struct idpf_ctlq_msg *ctlq_msg)
+{
+   struct virtchnl2_mac_addr_list *ma_list;
+   struct idpf_vport_config *vport_config;
+   struct virtchnl2_mac_addr *mac_addr;
+   struct idpf_mac_filter *f, *tmp;
+   struct list_head *ma_list_head;
+   struct idpf_vport *vport;
+   u16 num_entries;
+   int i;
+
+   /* if success we're done, we're only here if something bad happened */
+   if (!ctlq_msg->cookie.mbx.chnl_retval)
+   return 0;
+
+   /* make sure at least struct is there */
+   if (xn->reply_sz < sizeof(*ma_list))
+   goto invalid_payload;
+
+   ma_list = ctlq_msg->ctx.indirect.payload->va;
+   mac_addr = ma_list->mac_addr_list;
+   num_entries = le16_to_cpu(ma_list->num_mac_addr);
+   /* we should have received a buffer at least this big */
+   if (xn->reply_sz < struct_size(ma_list, mac_addr_list, num_entries))
+   goto invalid_payload;
+
+   vport = idpf_vid_to_vport(adapter, le32_to_cpu(ma_list->vport_id));
+   if (!vport)
+   goto invalid_payload;
+
+   vport_config = adapter->vport_config[le32_to_cpu(ma_list->vport_id)];
+   ma_list_head = &vport_config->user_config.mac_filter_list;
+
+   /* We can't do much to reconcile bad filters at this point, however we
+* should at least remove them from our list one way or the other so we
+* have some idea what good filters we have.
+*/
+   spin_lock_bh(&vport_config->mac_filter_list_lock);
+   list_for_each_entry_safe(f, tmp, ma_list_head, list)
+   for (i = 0; i < num_entries; i++)
+   if (ether_addr_equal(mac_addr[i].addr, f->macaddr))
+   list_del(&f->list);
+   spin_unlock_bh(&vport_config->mac_filter_list_lock);
+   dev_err_ratelimited(&adapter->pdev->dev, "Received error sending MAC 
filter request (op %d)\n",
+   xn->vc_op);
+
+   return 0;
+
+invalid_payload:
+   dev_err_ratelimited(&adapter->pdev->dev, "Received invalid MAC filter 
payload (op %d) (len %zd)\n",
+   xn->vc_op, xn->reply_sz);
+
+   return -EINVAL;
+}
+
 /**
  * idpf_add_del_mac_filters - Add/del mac filters
  * @vport: Virtual port data structure
@@ -3562,6 +3631,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
VIRTCHNL2_OP_DEL_MAC_ADDR;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
xn_params.async = async;
+   xn_params.async_handler = idpf_mac_filter_async_handler;
 
vport_config = adapter->vport_config[np->vport_idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
-- 
2.40.1



[Intel-wired-lan] [PATCH v3 6/7 iwl-next] idpf: refactor idpf_recv_mb_msg

2024-01-29 Thread Alan Brady
Now that all the messages are using the transaction API, we can rework
idpf_recv_mb_msg quite a lot to simplify it. Due to this, we remove
idpf_find_vport, as it is no longer used, and alter idpf_recv_event_msg
slightly.

Reviewed-by: Przemek Kitszel 
Reviewed-by: Igor Bagnucki 
Signed-off-by: Alan Brady 
---
 drivers/net/ethernet/intel/idpf/idpf.h|   3 +-
 drivers/net/ethernet/intel/idpf/idpf_lib.c|   2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 254 +++---
 3 files changed, 35 insertions(+), 224 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h 
b/drivers/net/ethernet/intel/idpf/idpf.h
index 393f6e46012f..2d5449b9288a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -1031,8 +1031,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport);
 int idpf_get_vec_ids(struct idpf_adapter *adapter,
 u16 *vecids, int num_vecids,
 struct virtchnl2_vector_chunks *chunks);
-int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
-void *msg, int msg_size);
+int idpf_recv_mb_msg(struct idpf_adapter *adapter);
 int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
 u16 msg_size, u8 *msg, u16 cookie);
 void idpf_set_ethtool_ops(struct net_device *netdev);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c 
b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 6b4a7408246c..8135a6b4a71e 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -1253,7 +1253,7 @@ void idpf_mbx_task(struct work_struct *work)
queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
   msecs_to_jiffies(300));
 
-   idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_UNKNOWN, NULL, 0);
+   idpf_recv_mb_msg(adapter);
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c 
b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index b4535972d03b..9e1ec17c905d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -63,12 +63,12 @@ static void idpf_handle_event_link(struct idpf_adapter 
*adapter,
 
 /**
  * idpf_recv_event_msg - Receive virtchnl event message
- * @vport: virtual port structure
+ * @adapter: Driver specific private structure
  * @ctlq_msg: message to copy from
  *
  * Receive virtchnl event message
  */
-static void idpf_recv_event_msg(struct idpf_vport *vport,
+static void idpf_recv_event_msg(struct idpf_adapter *adapter,
struct idpf_ctlq_msg *ctlq_msg)
 {
int payload_size = ctlq_msg->ctx.indirect.payload->size;
@@ -76,7 +76,7 @@ static void idpf_recv_event_msg(struct idpf_vport *vport,
u32 event;
 
if (payload_size < sizeof(*v2e)) {
-   dev_err_ratelimited(&vport->adapter->pdev->dev, "Failed to 
receive valid payload for event msg (op %d len %d)\n",
+   dev_err_ratelimited(&adapter->pdev->dev, "Failed to receive 
valid payload for event msg (op %d len %d)\n",
ctlq_msg->cookie.mbx.chnl_opcode,
payload_size);
return;
@@ -87,10 +87,10 @@ static void idpf_recv_event_msg(struct idpf_vport *vport,
 
switch (event) {
case VIRTCHNL2_EVENT_LINK_CHANGE:
-   idpf_handle_event_link(vport->adapter, v2e);
+   idpf_handle_event_link(adapter, v2e);
return;
default:
-   dev_err(&vport->adapter->pdev->dev,
+   dev_err(&adapter->pdev->dev,
"Unknown event %d from PF\n", event);
break;
}
@@ -203,125 +203,6 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
return 0;
 }
 
-/**
- * idpf_find_vport - Find vport pointer from control queue message
- * @adapter: driver specific private structure
- * @vport: address of vport pointer to copy the vport from adapters vport list
- * @ctlq_msg: control queue message
- *
- * Return 0 on success, error value on failure. Also this function does check
- * for the opcodes which expect to receive payload and return error value if
- * it is not the case.
- */
-static int idpf_find_vport(struct idpf_adapter *adapter,
-  struct idpf_vport **vport,
-  struct idpf_ctlq_msg *ctlq_msg)
-{
-   bool no_op = false, vid_found = false;
-   int i, err = 0;
-   char *vc_msg;
-   u32 v_id;
-
-   vc_msg = kcalloc(IDPF_CTLQ_MAX_BUF_LEN, sizeof(char), GFP_KERNEL);
-   if (!vc_msg)
-   return -ENOMEM;
-
-   if (ctlq_msg->data_len) {
-   size_t payload_size = ctlq_msg->ctx.indirect.payload->size;
-
-   if (!payload_size) {
-   dev_err(&adapter->pdev->dev, "Failed to receive payload 
buffer\n");
-   kfree(vc_msg);
-
-   

[Intel-wired-lan] [PATCH v3 7/7 iwl-next] idpf: cleanup virtchnl cruft

2024-01-29 Thread Alan Brady
We can now remove a bunch of gross code we don't need anymore, like the
vc state bits and vc_buf_lock, since everything is using the transaction
API now.

Reviewed-by: Przemek Kitszel 
Reviewed-by: Igor Bagnucki 
Signed-off-by: Alan Brady 
---
 drivers/net/ethernet/intel/idpf/idpf.h| 86 ---
 drivers/net/ethernet/intel/idpf/idpf_lib.c| 25 +-
 drivers/net/ethernet/intel/idpf/idpf_main.c   |  2 -
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 13 ---
 4 files changed, 2 insertions(+), 124 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h 
b/drivers/net/ethernet/intel/idpf/idpf.h
index 2d5449b9288a..0793173bb36d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -37,8 +37,6 @@ struct idpf_vport_max_q;
 #define IDPF_MB_MAX_ERR20
 #define IDPF_NUM_CHUNKS_PER_MSG(struct_sz, chunk_sz)   \
((IDPF_CTLQ_MAX_BUF_LEN - (struct_sz)) / (chunk_sz))
-#define IDPF_WAIT_FOR_EVENT_TIMEO_MIN  2000
-#define IDPF_WAIT_FOR_EVENT_TIMEO  6
 #define IDPF_VC_XN_MIN_TIMEOUT_MSEC2000
 #define IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC(60 * 1000)
 #define IDPF_VC_XN_IDX_M   GENMASK(7, 0)
@@ -212,71 +210,6 @@ struct idpf_dev_ops {
struct idpf_reg_ops reg_ops;
 };
 
-/* These macros allow us to generate an enum and a matching char * array of
- * stringified enums that are always in sync. Checkpatch issues a bogus warning
- * about this being a complex macro; but it's wrong, these are never used as a
- * statement and instead only used to define the enum and array.
- */
-#define IDPF_FOREACH_VPORT_VC_STATE(STATE) \
-   STATE(IDPF_VC_CREATE_VPORT) \
-   STATE(IDPF_VC_CREATE_VPORT_ERR) \
-   STATE(IDPF_VC_ENA_VPORT)\
-   STATE(IDPF_VC_ENA_VPORT_ERR)\
-   STATE(IDPF_VC_DIS_VPORT)\
-   STATE(IDPF_VC_DIS_VPORT_ERR)\
-   STATE(IDPF_VC_DESTROY_VPORT)\
-   STATE(IDPF_VC_DESTROY_VPORT_ERR)\
-   STATE(IDPF_VC_CONFIG_TXQ)   \
-   STATE(IDPF_VC_CONFIG_TXQ_ERR)   \
-   STATE(IDPF_VC_CONFIG_RXQ)   \
-   STATE(IDPF_VC_CONFIG_RXQ_ERR)   \
-   STATE(IDPF_VC_ENA_QUEUES)   \
-   STATE(IDPF_VC_ENA_QUEUES_ERR)   \
-   STATE(IDPF_VC_DIS_QUEUES)   \
-   STATE(IDPF_VC_DIS_QUEUES_ERR)   \
-   STATE(IDPF_VC_MAP_IRQ)  \
-   STATE(IDPF_VC_MAP_IRQ_ERR)  \
-   STATE(IDPF_VC_UNMAP_IRQ)\
-   STATE(IDPF_VC_UNMAP_IRQ_ERR)\
-   STATE(IDPF_VC_ADD_QUEUES)   \
-   STATE(IDPF_VC_ADD_QUEUES_ERR)   \
-   STATE(IDPF_VC_DEL_QUEUES)   \
-   STATE(IDPF_VC_DEL_QUEUES_ERR)   \
-   STATE(IDPF_VC_ALLOC_VECTORS)\
-   STATE(IDPF_VC_ALLOC_VECTORS_ERR)\
-   STATE(IDPF_VC_DEALLOC_VECTORS)  \
-   STATE(IDPF_VC_DEALLOC_VECTORS_ERR)  \
-   STATE(IDPF_VC_SET_SRIOV_VFS)\
-   STATE(IDPF_VC_SET_SRIOV_VFS_ERR)\
-   STATE(IDPF_VC_GET_RSS_LUT)  \
-   STATE(IDPF_VC_GET_RSS_LUT_ERR)  \
-   STATE(IDPF_VC_SET_RSS_LUT)  \
-   STATE(IDPF_VC_SET_RSS_LUT_ERR)  \
-   STATE(IDPF_VC_GET_RSS_KEY)  \
-   STATE(IDPF_VC_GET_RSS_KEY_ERR)  \
-   STATE(IDPF_VC_SET_RSS_KEY)  \
-   STATE(IDPF_VC_SET_RSS_KEY_ERR)  \
-   STATE(IDPF_VC_GET_STATS)\
-   STATE(IDPF_VC_GET_STATS_ERR)\
-   STATE(IDPF_VC_ADD_MAC_ADDR) \
-   STATE(IDPF_VC_ADD_MAC_ADDR_ERR) \
-   STATE(IDPF_VC_DEL_MAC_ADDR) \
-   STATE(IDPF_VC_DEL_MAC_ADDR_ERR) \
-   STATE(IDPF_VC_GET_PTYPE_INFO)   \
-   STATE(IDPF_VC_GET_PTYPE_INFO_ERR)   \
-   STATE(IDPF_VC_LOOPBACK_STATE)   \
-   STATE(IDPF_VC_LOOPBACK_STATE_ERR)   \
-   STATE(IDPF_VC_NBITS)
-
-#define IDPF_GEN_ENUM(ENUM) ENUM,
-#define IDPF_GEN_STRING(STRING) #STRING,
-
-enum idpf_vport_vc_state {
-   IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_ENUM)
-};
-
-extern const char * const idpf_vport_vc_state_str[];
-
 /**
  * enum idpf_vport_reset_cause - Vport soft reset causes
  * @IDPF_SR_Q_CHANGE: Soft reset queue change
@@ -451,11 +384,7 @@ struct idpf_vc_xn_manager {
  * @port_stats: per port csum, header split, and other offload stats
  * @link_up: True if link is up
  * @link_speed_mbps: Link speed in mbps
- * @vc_msg: Virtchnl message buffer
- * @vc_state: Virtchnl message state
- * @vchnl_wq: Wait queue for virtchnl messages
  * @sw_marker_wq: workqueue for marker packets
- * @vc_buf_lock: Lock to protect virtchnl buffer
  */
 struct idpf_vport {
u16 num_txq;
@@ -501,12 +430,7 @@ struct idpf_vport {
bool link_up;
u32 link_speed_mbps;
 
-   char vc_msg

[Intel-wired-lan] [PATCH RESEND iwl-next 0/2] ice: Introduce switch recipe reusing

2024-01-29 Thread Steven Zou
This patch series first fixes the bitmap casting issue in getting/mapping
the recipe-to-profile association, which is used in both legacy and recipe
reusing modes, and then introduces the switch recipe reusing feature, as
new E810 firmware supports the corresponding functionality.

Steven Zou (2):
  ice: Refactor FW data type and fix bitmap casting issue
  ice: Add switch recipe reusing feature

 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   5 +-
 drivers/net/ethernet/intel/ice/ice_common.c   |   2 +
 drivers/net/ethernet/intel/ice/ice_lag.c  |   4 +-
 drivers/net/ethernet/intel/ice/ice_switch.c   | 211 +++---
 drivers/net/ethernet/intel/ice/ice_switch.h   |   5 +-
 drivers/net/ethernet/intel/ice/ice_type.h |   2 +
 6 files changed, 197 insertions(+), 32 deletions(-)


base-commit: f2aae2089368d8e5abb4ec01542c31bc4d011a74
-- 
2.31.1



[Intel-wired-lan] [PATCH RESEND iwl-next 1/2] ice: Refactor FW data type and fix bitmap casting issue

2024-01-29 Thread Steven Zou
According to the datasheet, the recipe association data is an 8-byte
little-endian value. It is described as 'Bitmap of the recipe indexes
associated with this profile' and occupies bytes 24 to 31 of the FW
response. Therefore, define it as '__le64 recipe_assoc' in struct
ice_aqc_recipe_to_profile. Then fix the bitmap casting issue, as casts
must never be used on bitmap types.

Reviewed-by: Przemek Kitszel 
Reviewed-by: Andrii Staikov 
Reviewed-by: Jan Sokolowski 
Reviewed-by: Simon Horman 
Signed-off-by: Steven Zou 
---
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  3 ++-
 drivers/net/ethernet/intel/ice/ice_lag.c  |  4 ++--
 drivers/net/ethernet/intel/ice/ice_switch.c   | 24 +++
 drivers/net/ethernet/intel/ice/ice_switch.h   |  4 ++--
 4 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h 
b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 2c703e95fb0d..5df50a0b7867 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -602,8 +602,9 @@ struct ice_aqc_recipe_data_elem {
 struct ice_aqc_recipe_to_profile {
__le16 profile_id;
u8 rsvd[6];
-   DECLARE_BITMAP(recipe_assoc, ICE_MAX_NUM_RECIPES);
+   __le64 recipe_assoc;
 };
+static_assert(sizeof(struct ice_aqc_recipe_to_profile) == 16);
 
 /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
  */
diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c 
b/drivers/net/ethernet/intel/ice/ice_lag.c
index 467372d541d2..a7a342809935 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -2041,7 +2041,7 @@ int ice_init_lag(struct ice_pf *pf)
/* associate recipes to profiles */
for (n = 0; n < ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER; n++) {
err = ice_aq_get_recipe_to_profile(&pf->hw, n,
-  (u8 *)&recipe_bits, NULL);
+  &recipe_bits, NULL);
if (err)
continue;
 
@@ -2049,7 +2049,7 @@ int ice_init_lag(struct ice_pf *pf)
recipe_bits |= BIT(lag->pf_recipe) |
   BIT(lag->lport_recipe);
ice_aq_map_recipe_to_profile(&pf->hw, n,
-(u8 *)&recipe_bits, NULL);
+recipe_bits, NULL);
}
}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c 
b/drivers/net/ethernet/intel/ice/ice_switch.c
index f84bab80ca42..ba0ef91e4c19 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -2025,12 +2025,12 @@ ice_update_recipe_lkup_idx(struct ice_hw *hw,
  * ice_aq_map_recipe_to_profile - Map recipe to packet profile
  * @hw: pointer to the HW struct
  * @profile_id: package profile ID to associate the recipe with
- * @r_bitmap: Recipe bitmap filled in and need to be returned as response
+ * @r_assoc: Recipe bitmap filled in and need to be returned as response
  * @cd: pointer to command details structure or NULL
  * Recipe to profile association (0x0291)
  */
 int
-ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 r_assoc,
 struct ice_sq_cd *cd)
 {
struct ice_aqc_recipe_to_profile *cmd;
@@ -2042,7 +2042,7 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 
profile_id, u8 *r_bitmap,
/* Set the recipe ID bit in the bitmask to let the device know which
 * profile we are associating the recipe to
 */
-   memcpy(cmd->recipe_assoc, r_bitmap, sizeof(cmd->recipe_assoc));
+   cmd->recipe_assoc = cpu_to_le64(r_assoc);
 
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
@@ -2051,12 +2051,12 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 
profile_id, u8 *r_bitmap,
  * ice_aq_get_recipe_to_profile - Map recipe to packet profile
  * @hw: pointer to the HW struct
  * @profile_id: package profile ID to associate the recipe with
- * @r_bitmap: Recipe bitmap filled in and need to be returned as response
+ * @r_assoc: Recipe bitmap filled in and need to be returned as response
  * @cd: pointer to command details structure or NULL
  * Associate profile ID with given recipe (0x0293)
  */
 int
-ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 *r_assoc,
 struct ice_sq_cd *cd)
 {
struct ice_aqc_recipe_to_profile *cmd;
@@ -2069,7 +2069,7 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 
profile_id, u8 *r_bitmap,
 
status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
if (!status)
-   memcpy(r_bitmap, cmd-

[Intel-wired-lan] [PATCH RESEND iwl-next 2/2] ice: Add switch recipe reusing feature

2024-01-29 Thread Steven Zou
New E810 firmware supports the corresponding functionality, so the driver
allows PFs to subscribe to the same switch recipes. When a PF is done with
a switch recipe, it can ask firmware to free that switch recipe.

When a user configures a rule for PFn in the E810 switch component and no
existing recipe matches the rule's pattern, the driver requests firmware to
allocate and return a new recipe resource for the rule by calling
ice_add_sw_recipe() and ice_alloc_recipe(). If an existing recipe matches
the rule's pattern with a different key value, or the same rule is added a
second time for PFm, the driver looks up that recipe via ice_find_recp()
and tells firmware to share the same recipe resource by calling
ice_subscribable_recp_shared() and ice_subscribe_recipe().

When firmware detects that all subscribing PFs have freed the switch
recipe, firmware will free the switch recipe so that it can be reused.

This feature also fixes a problem where all switch recipes would eventually
be exhausted because switch recipes could not be freed, as freeing a shared
recipe could potentially break other PFs that were using it.
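
As a rough sketch of the subscribe path described above (the helper name
and exact flag usage are assumptions; the real implementation in
ice_switch.c may differ), subscribing to an already-allocated recipe can
reuse the alloc/free-resource AQ call with the new flags added in
ice_adminq_cmd.h below, passing the known recipe ID in sw_resp:

/* Sketch only: subscribe this PF to recipe 'rid' that another PF already
 * allocated as subscribable-shared.
 */
static int example_subscribe_recipe(struct ice_hw *hw, u16 rid)
{
	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
	u16 buf_len = __struct_size(sw_buf);
	u16 res_type;

	sw_buf->num_elems = cpu_to_le16(1);
	res_type = FIELD_PREP(ICE_AQC_RES_TYPE_M, ICE_AQC_RES_TYPE_RECIPE);
	/* Assumed flag usage: SUBSCRIBE_SHARED marks the recipe resource as
	 * shared-by-subscription, SUBSCRIBE_CTL indicates joining an
	 * existing allocation rather than creating a new one.
	 */
	res_type |= ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED |
		    ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_CTL;
	sw_buf->res_type = cpu_to_le16(res_type);
	/* Tell firmware which recipe we want to subscribe to */
	sw_buf->elem[0].e.sw_resp = cpu_to_le16(rid);

	return ice_aq_alloc_free_res(hw, sw_buf, buf_len,
				     ice_aqc_opc_alloc_res);
}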

Reviewed-by: Przemek Kitszel 
Reviewed-by: Andrii Staikov 
Signed-off-by: Steven Zou 
---
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   2 +
 drivers/net/ethernet/intel/ice/ice_common.c   |   2 +
 drivers/net/ethernet/intel/ice/ice_switch.c   | 187 --
 drivers/net/ethernet/intel/ice/ice_switch.h   |   1 +
 drivers/net/ethernet/intel/ice/ice_type.h |   2 +
 5 files changed, 177 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h 
b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 5df50a0b7867..b315c734455a 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -273,6 +273,8 @@ struct ice_aqc_set_port_params {
 #define ICE_AQC_RES_TYPE_FLAG_SHARED   BIT(7)
 #define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM  BIT(12)
 #define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX BIT(13)
+#define ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED BIT(14)
+#define ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_CTL BIT(15)
 
 #define ICE_AQC_RES_TYPE_FLAG_DEDICATED 0x00
 
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c 
b/drivers/net/ethernet/intel/ice/ice_common.c
index d04a057f53fe..090a2b8b5ff2 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1146,6 +1146,8 @@ int ice_init_hw(struct ice_hw *hw)
if (status)
goto err_unroll_fltr_mgmt_struct;
mutex_init(&hw->tnl_lock);
+   ice_init_chk_recipe_reuse_support(hw);
+
return 0;
 
 err_unroll_fltr_mgmt_struct:
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c 
b/drivers/net/ethernet/intel/ice/ice_switch.c
index ba0ef91e4c19..53dd64768035 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -2074,6 +2074,18 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 
profile_id, u64 *r_assoc,
return status;
 }
 
+/**
+ * ice_init_chk_recipe_reuse_support - check if recipe reuse is supported
+ * @hw: pointer to the hardware structure
+ */
+void ice_init_chk_recipe_reuse_support(struct ice_hw *hw)
+{
+   struct ice_nvm_info *nvm = &hw->flash.nvm;
+
+   hw->recp_reuse = (nvm->major == 0x4 && nvm->minor >= 0x30) ||
+nvm->major > 0x4;
+}
+
 /**
  * ice_alloc_recipe - add recipe resource
  * @hw: pointer to the hardware structure
@@ -2083,12 +2095,16 @@ int ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
 {
DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
u16 buf_len = __struct_size(sw_buf);
+   u16 res_type;
int status;
 
sw_buf->num_elems = cpu_to_le16(1);
-   sw_buf->res_type = cpu_to_le16((ICE_AQC_RES_TYPE_RECIPE <<
-   ICE_AQC_RES_TYPE_S) |
-   ICE_AQC_RES_TYPE_FLAG_SHARED);
+   res_type = FIELD_PREP(ICE_AQC_RES_TYPE_M, ICE_AQC_RES_TYPE_RECIPE);
+   if (hw->recp_reuse)
+   res_type |= ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED;
+   else
+   res_type |= ICE_AQC_RES_TYPE_FLAG_SHARED;
+   sw_buf->res_type = cpu_to_le16(res_type);
status = ice_aq_alloc_free_res(hw, sw_buf, buf_len,
   ice_aqc_opc_alloc_res);
if (!status)
@@ -2097,6 +2113,70 @@ int ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
return status;
 }
 
+/**
+ * ice_free_recipe_res - free recipe resource
+ * @hw: pointer to the hardware structure
+ * @rid: recipe ID to free
+ *
+ * Return: 0 on success, and others on error
+ */
+static int ice_free_recipe_res(struct ice_hw *hw, u16 rid)
+{
+   return ice_free_hw_res(hw, ICE_AQC_RES_TYPE_RECIPE, 1, &r

Re: [Intel-wired-lan] [PATCH net-next RESENT v3] ethtool: ice: Support for RSS settings to GTP from ethtool

2024-01-29 Thread takeru hayasaka
Hi Marcin-san
Thanks for your review!

> Do I understand correctly that all gtpu* include TEID? Maybe write it here.
Yes, that's correct.

> It would be nice to see a link to the patch that added GTP and 'e' flag 
> support
> to ethtool itself ("ethtool: add support for rx-flow-hash gtp").
I will send you the link.
The one I sent earlier was outdated, so I've updated it to match this patch.
https://lore.kernel.org/netdev/20240130053742.946517-1-hayatake...@gmail.com/

> gtpc(4|6) doesn't include TEID, so what is its purpose?
In GTPC communication, there is no TEID in the CSR (Create Session Request).
Therefore, there are GTPC messages that do not include a TEID.

> s/TEID(4byte)/TEID (4bytes)/
> Also, I think two newlines should remain here.
I will correct the TEID notation in the next patch!

> I'm a bit confused... I thought GTPC_V4_FLOW doesn't have TEID, and
> GTPC_TEID_V4_FLOW does. So why do we set ICE_FLOW_SEG_HDR_GTPC_TEID here, and
> why is GTPC_TEID_V4_FLOW missing from this switch case? It also seems to be
> missing from ice_parse_hash_flds().
Oh, sorry about that! It seems I accidentally included the wrong thing
during the rebase process.
I will correct it in the next patch.
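
Purely as a hypothetical illustration of the distinction (the flow-type and
segment-header names are the ones discussed in this thread; the actual
mapping will only appear in the next revision of the patch):

/* Hypothetical sketch of the GTPC vs. GTPC-with-TEID split; the real
 * switch lives in ice_ethtool.c and may look different in the next patch.
 */
static u32 example_gtpc_flow_hdrs(u32 flow_type)
{
	u32 hdrs = ICE_FLOW_SEG_HDR_NONE;

	switch (flow_type) {
	case GTPC_V4_FLOW:
		/* GTP-C without a TEID (e.g. Create Session Request) */
		hdrs |= ICE_FLOW_SEG_HDR_GTPC | ICE_FLOW_SEG_HDR_IPV4;
		break;
	case GTPC_TEID_V4_FLOW:
		/* GTP-C with a TEID present in the header */
		hdrs |= ICE_FLOW_SEG_HDR_GTPC_TEID | ICE_FLOW_SEG_HDR_IPV4;
		break;
	}

	return hdrs;
}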

> What do you mean by "from before"? The GTP* defines above?
I apologize for the unclear writing.
By "the difference from before," I specifically meant the difference
from GTPU_EH_V(4|6)_FLOW.
The PSC included in the Extension Header distinguishes between UL/DL.
Using this option makes it possible to take the UL/DL direction into
account as well.

Marcin-san, thank you very much for your feedback! I will reflect your
comments in the next patch.

Takeru

Mon, Jan 29, 2024 at 23:23 Marcin Szycik :

>
>
>
> On 27.01.2024 15:07, Takeru Hayasaka wrote:
> > This is a patch that enables RSS functionality for GTP packets using 
> > ethtool.
> >
> > A user can include her TEID and make RSS work for GTP-U over IPv4 by doing 
> > the following:
> > `ethtool -N ens3 rx-flow-hash gtpu4 sde`
> >
> > In addition to gtpu(4|6), we now support 
> > gtpc(4|6),gtpc(4|6)t,gtpu(4|6)e,gtpu(4|6)u, and gtpu(4|6)d.
> >
> > gtpc(4|6): Used for GTP-C in IPv4 and IPv6, where the GTP header format 
> > does not include a TEID.
> > gtpc(4|6)t: Used for GTP-C in IPv4 and IPv6, with a GTP header format that 
> > includes a TEID.
> > gtpu(4|6): Used for GTP-U in both IPv4 and IPv6 scenarios.
> > gtpu(4|6)e: Used for GTP-U with extended headers in both IPv4 and IPv6.
> > gtpu(4|6)u: Used when the PSC (PDU session container) in the GTP-U extended 
> > header includes Uplink, applicable to both IPv4 and IPv6.
> > gtpu(4|6)d: Used when the PSC in the GTP-U extended header includes 
> > Downlink, for both IPv4 and IPv6.
>
> Do I understand correctly that all gtpu* include TEID? Maybe write it here.
>
> > GTP generates a flow that includes an ID called TEID to identify the 
> > tunnel. This tunnel is created for each UE (User Equipment).By performing 
> > RSS based on this flow, it is possible to apply RSS for each communication 
> > unit from the UE.Without this, RSS would only be effective within the range 
> > of IP addresses.
> > For instance, the PGW can only perform RSS within the IP range of the 
> > SGW.Problematic from a load distribution perspective, especially if there's 
> > a bias in the terminals connected to a particular base station.This case 
> > can be solved by using this patch.
>
> It would be nice to see a link to the patch that added GTP and 'e' flag 
> support
> to ethtool itself ("ethtool: add support for rx-flow-hash gtp").
>
> > Signed-off-by: Takeru Hayasaka 
> > ---
> > (I have resent the submission after making revisions based on Paul's 
> > advice.)
> > Sorry for the delay.
> > I've been swamped with other work and fell behind.
> > Since Harald has been supportive of the basic structure in the previous 
> > patch review, I've kept it largely unchanged but added some comments and 
> > documentation.
> > I would appreciate it if you could review it again.
>
> Please wrap all text to 80 columns and add missing spaces after ',' and '.'.
>
> >  .../device_drivers/ethernet/intel/ice.rst | 23 --
> >  drivers/net/ethernet/intel/ice/ice_ethtool.c  | 74 +++
> >  drivers/net/ethernet/intel/ice/ice_flow.h | 22 ++
> >  drivers/net/ethernet/intel/ice/ice_lib.c  | 37 ++
> >  include/uapi/linux/ethtool.h  | 41 ++
> >  5 files changed, 192 insertions(+), 5 deletions(-)
> >
> > diff --git a/Documentation/networking/device_drivers/ethernet/intel/ice.rst 
> > b/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> > index 5038e54586af..6bc1c6f10617 100644
> > --- a/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> > +++ b/Documentation/networking/device_drivers/ethernet/intel/ice.rst
> > @@ -368,16 +368,29 @@ more options for Receive Side Scaling (RSS) hash byte 
> > configuration.
> ># ethtool -N  rx-flow-hash  
> >
> >Where  is:
> > -tcp4  signifying TCP over IPv4
> > -   

[Intel-wired-lan] [tnguy-net-queue:1GbE] BUILD SUCCESS bbc404d20d1b46d89b461918bc44587620eda200

2024-01-29 Thread kernel test robot
tree/branch: 
https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git 1GbE
branch HEAD: bbc404d20d1b46d89b461918bc44587620eda200  ixgbe: Fix an error 
handling path in ixgbe_read_iosf_sb_reg_x550()

elapsed time: 727m

configs tested: 190
configs skipped: 3

The following configs have been built successfully.
More configs may be tested in the coming days.

tested configs:
alpha allnoconfig   gcc  
alphaallyesconfig   gcc  
alpha   defconfig   gcc  
arc  allmodconfig   gcc  
arc   allnoconfig   gcc  
arc  allyesconfig   gcc  
arc  axs103_defconfig   gcc  
arc defconfig   gcc  
arc   randconfig-001-20240130   gcc  
arc   randconfig-002-20240130   gcc  
arcvdk_hs38_smp_defconfig   gcc  
arm  allmodconfig   gcc  
arm   allnoconfig   gcc  
arm  allyesconfig   gcc  
arm am200epdkit_defconfig   clang
arm axm55xx_defconfig   gcc  
arm defconfig   clang
arm  ixp4xx_defconfig   clang
arm  jornada720_defconfig   gcc  
arm lpc18xx_defconfig   gcc  
arm   milbeaut_m10v_defconfig   clang
armmulti_v5_defconfig   clang
arm nhk8815_defconfig   gcc  
arm pxa_defconfig   gcc  
arm   randconfig-001-20240130   gcc  
arm   randconfig-002-20240130   gcc  
arm   randconfig-003-20240130   gcc  
arm   randconfig-004-20240130   gcc  
armrealview_defconfig   gcc  
arm s3c6400_defconfig   gcc  
arm   spear13xx_defconfig   clang
arm vf610m4_defconfig   gcc  
arm64allmodconfig   clang
arm64 allnoconfig   gcc  
arm64   defconfig   gcc  
arm64 randconfig-001-20240130   gcc  
arm64 randconfig-002-20240130   gcc  
arm64 randconfig-003-20240130   gcc  
arm64 randconfig-004-20240130   gcc  
csky allmodconfig   gcc  
csky  allnoconfig   gcc  
csky allyesconfig   gcc  
cskydefconfig   gcc  
csky  randconfig-001-20240130   gcc  
csky  randconfig-002-20240130   gcc  
hexagon  allmodconfig   clang
hexagon   allnoconfig   clang
hexagon  allyesconfig   clang
hexagon defconfig   clang
hexagon   randconfig-001-20240130   clang
hexagon   randconfig-002-20240130   clang
i386 allmodconfig   clang
i386  allnoconfig   clang
i386 allyesconfig   clang
i386 buildonly-randconfig-001-20240130   gcc  
i386 buildonly-randconfig-002-20240130   gcc  
i386 buildonly-randconfig-003-20240130   gcc  
i386 buildonly-randconfig-004-20240130   gcc  
i386 buildonly-randconfig-005-20240130   gcc  
i386 buildonly-randconfig-006-20240130   gcc  
i386defconfig   gcc  
i386  randconfig-001-20240130   gcc  
i386  randconfig-002-20240130   gcc  
i386  randconfig-003-20240130   gcc  
i386  randconfig-004-20240130   gcc  
i386  randconfig-005-20240130   gcc  
i386  randconfig-006-20240130   gcc  
i386  randconfig-011-20240130   clang
i386  randconfig-012-20240130   clang
i386  randconfig-013-20240130   clang
i386  randconfig-014-20240130   clang
i386  randconfig-015-20240130   clang
i386  randconfig-016-20240130   clang
loongarchallmodconfig   gcc  
loongarch allnoconfig   gcc  
loongarchallyesconfig   gcc  
loongarch   defconfig   gcc  
loongarch randconfig-001-20240130   gcc  
loongarch randconfig-002-20240130   gcc  
m68k allmodconfig   gcc  
m68k  allnoconfig   gcc  
m68k allyesconfig   gcc  
m68kdefconfig   gcc  
m68km5307c3_defconfig   gcc  
microblaze   alldefconfig   gcc  
microblaze  

[Intel-wired-lan] [tnguy-next-queue:dev-queue] BUILD SUCCESS f2aae2089368d8e5abb4ec01542c31bc4d011a74

2024-01-29 Thread kernel test robot
tree/branch: 
https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue
branch HEAD: f2aae2089368d8e5abb4ec01542c31bc4d011a74  igc: Unify filtering 
rule fields

elapsed time: 729m

configs tested: 163
configs skipped: 3

The following configs have been built successfully.
More configs may be tested in the coming days.

tested configs:
alpha allnoconfig   gcc  
alphaallyesconfig   gcc  
alpha   defconfig   gcc  
arc  allmodconfig   gcc  
arc   allnoconfig   gcc  
arc  allyesconfig   gcc  
arc defconfig   gcc  
arc   randconfig-001-20240130   gcc  
arc   randconfig-002-20240130   gcc  
arm  allmodconfig   gcc  
arm   allnoconfig   gcc  
arm  allyesconfig   gcc  
arm defconfig   clang
arm   randconfig-001-20240130   gcc  
arm   randconfig-002-20240130   gcc  
arm   randconfig-003-20240130   gcc  
arm   randconfig-004-20240130   gcc  
arm64allmodconfig   clang
arm64 allnoconfig   gcc  
arm64   defconfig   gcc  
arm64 randconfig-001-20240130   gcc  
arm64 randconfig-002-20240130   gcc  
arm64 randconfig-003-20240130   gcc  
arm64 randconfig-004-20240130   gcc  
csky allmodconfig   gcc  
csky  allnoconfig   gcc  
csky allyesconfig   gcc  
cskydefconfig   gcc  
csky  randconfig-001-20240130   gcc  
csky  randconfig-002-20240130   gcc  
hexagon  allmodconfig   clang
hexagon   allnoconfig   clang
hexagon  allyesconfig   clang
hexagon defconfig   clang
hexagon   randconfig-001-20240130   clang
hexagon   randconfig-002-20240130   clang
i386 allmodconfig   clang
i386  allnoconfig   clang
i386 allyesconfig   clang
i386 buildonly-randconfig-001-20240130   gcc  
i386 buildonly-randconfig-002-20240130   gcc  
i386 buildonly-randconfig-003-20240130   gcc  
i386 buildonly-randconfig-004-20240130   gcc  
i386 buildonly-randconfig-005-20240130   gcc  
i386 buildonly-randconfig-006-20240130   gcc  
i386defconfig   gcc  
i386  randconfig-001-20240130   gcc  
i386  randconfig-002-20240130   gcc  
i386  randconfig-003-20240130   gcc  
i386  randconfig-004-20240130   gcc  
i386  randconfig-005-20240130   gcc  
i386  randconfig-006-20240130   gcc  
i386  randconfig-011-20240130   clang
i386  randconfig-012-20240130   clang
i386  randconfig-013-20240130   clang
i386  randconfig-014-20240130   clang
i386  randconfig-015-20240130   clang
i386  randconfig-016-20240130   clang
loongarchallmodconfig   gcc  
loongarch allnoconfig   gcc  
loongarch   defconfig   gcc  
loongarch randconfig-001-20240130   gcc  
loongarch randconfig-002-20240130   gcc  
m68k allmodconfig   gcc  
m68k  allnoconfig   gcc  
m68k allyesconfig   gcc  
m68kdefconfig   gcc  
microblaze   allmodconfig   gcc  
microblazeallnoconfig   gcc  
microblaze   allyesconfig   gcc  
microblaze  defconfig   gcc  
mips  allnoconfig   clang
mips allyesconfig   gcc  
nios2allmodconfig   gcc  
nios2 allnoconfig   gcc  
nios2allyesconfig   gcc  
nios2   defconfig   gcc  
nios2 randconfig-001-20240130   gcc  
nios2 randconfig-002-20240130   gcc  
openrisc  allnoconfig   gcc  
openrisc allyesconfig   gcc  
openriscdefconfig   gcc  
parisc   allmodconfig   gcc  
pariscallnoconfig   gcc  
parisc   allyesconfig   gcc  
parisc  defconfi