> >
> >>>> Add PM QoS configuration to decrease the delay after sleep in case of
> >>>> entering a deeper idle state.
> >>>>
> >>>> Signed-off-by: Huisong Li <lihuis...@huawei.com>
> >>>> Acked-by: Morten Brørup <m...@smartsharesystems.com>
> >>>> ---
> >>>>    examples/l3fwd-power/main.c | 24 ++++++++++++++++++++++++
> >>>>    1 file changed, 24 insertions(+)
> >>>>
> >>>> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> >>>> index 2bb6b092c3..b0ddb54ee2 100644
> >>>> --- a/examples/l3fwd-power/main.c
> >>>> +++ b/examples/l3fwd-power/main.c
> >>>> @@ -47,6 +47,7 @@
> >>>>    #include <rte_telemetry.h>
> >>>>    #include <rte_power_pmd_mgmt.h>
> >>>>    #include <rte_power_uncore.h>
> >>>> +#include <rte_power_qos.h>
> >>>>
> >>>>    #include "perf_core.h"
> >>>>    #include "main.h"
> >>>> @@ -2260,6 +2261,22 @@ init_power_library(void)
> >>>>                          return -1;
> >>>>                  }
> >>>>          }
> >>>> +
> >>>> +        RTE_LCORE_FOREACH(lcore_id) {
> >>>> +                /*
> >>>> +                 * Set the worker lcores to have a strict latency limit
> >>>> +                 * to allow the CPU to enter the shallowest idle state.
> >>>> +                 */
> >>>> +                ret = rte_power_qos_set_cpu_resume_latency(lcore_id,
> >>>> +                                RTE_POWER_QOS_STRICT_LATENCY_VALUE);
> >>> I wonder why it is set to all worker cores silently and unconditionally?
> >>> Wouldn't it be a change from current behavior of the power library?
> >> L3fwd-power uses Rx interrupt to receive packet.
> > AFAIK, not exactly.
> >  From what I remember l3fwd-power still runs RX in poll mode,
> > though it counts the number of idle rx bursts.
> > As that number goes above some threshold, it puts itself into
> > sleep with some timeout value.
> Exactly.
> >
> >> Do you mean this
> >> setting should apply to the cores handling Rx queues, right?
> >> This setting doesn't change the behavior of l3fwd-power. It is just for
> >> getting low resume latency when worker core wakes up from sleeping.
> > As I understand your patch - you force the CPU to select a more shallow
> > C-state when entering such sleep.
> > Then it means that possible packet loss will be smaller,
> > but power consumption probably higher, correct?
> correct.
> > If so, then it looks like a change from the current behavior of that app,
> > and we probably need to document what the expected change will be.
> > Or, probably as a better way - provide the user with a way to choose,
> > via a new cmdline option or so.
> Yes.
> The power consumption may increase but the performance is better due to
> this patch if the platform enables the cpuidle function.

Yes, that what I expect, and personally I am ok with that.
Though I suspect different users who use this sample as some test-app
might have different priorities in that tradeoff (power vs performance).

> After all, this is just a minor point. It is enough to document this
> change and its impact in the doc of this API. Just to make it clearer for users.
> What do you think?

I think yes, probably just updating docs (rel-notes, SG ?) will be enough.
David Hunt, what are your thoughts here?

> >
> >>>> +                if (ret != 0) {
> >>>> +                        RTE_LOG(ERR, L3FWD_POWER,
> >>>> +                                "Failed to set strict resume latency on core%u.\n",
> >>>> +                                lcore_id);
> >>>> +                        return ret;
> >>>> +                }
> >>>> +        }
> >>>> +
> >>>>          return ret;
> >>>>    }
> >>>>
> >>>> @@ -2299,6 +2316,13 @@ deinit_power_library(void)
> >>>>                          }
> >>>>                  }
> >>>>          }
> >>>> +
> >>>> +        RTE_LCORE_FOREACH(lcore_id) {
> >>>> +                /* Restore the original value in kernel. */
> >>>> +                rte_power_qos_set_cpu_resume_latency(lcore_id,
> >>>> +                                RTE_POWER_QOS_RESUME_LATENCY_NO_CONSTRAINT);
> >>>> +        }
> >>>> +
> >>>>          return ret;
> >>>>    }
> >>>>
> >>>> --
> >>>> 2.22.0
