> -----Original Message-----
> From: Thomas Monjalon <tho...@monjalon.net>
> Sent: Wednesday, June 24, 2020 11:50 PM
> To: Singh, Jasvinder <jasvinder.si...@intel.com>
> Cc: Dumitrescu, Cristian <cristian.dumitre...@intel.com>;
> 'alangordonde...@gmail.com' <alangordonde...@gmail.com>;
> dev@dpdk.org; 'Alan Dewar' <alan.de...@att.com>; Dewar, Alan
> <alan.de...@intl.att.com>
> Subject: Re: [dpdk-dev] [PATCH] sched: fix port time rounding error
>
> Jasvinder, what is the conclusion of this patch?
>
> 21/04/2020 10:21, Dewar, Alan:
> > From: Singh, Jasvinder <jasvinder.si...@intel.com>
> > > > > From: Alan Dewar <alan.de...@att.com>
> > > > >
> > > > > The QoS scheduler works off a port time that is computed from the
> > > > > number of CPU cycles that have elapsed since the last time the port
> > > > > was polled. It divides the number of elapsed cycles by the cycles
> > > > > per byte to calculate how many bytes can be sent; however, this
> > > > > division can generate rounding errors, where some fraction of a
> > > > > byte that should be sent is lost.
> > > > >
> > > > > Lose enough of these fractional bytes and the QoS scheduler
> > > > > underperforms. The problem is worse with low bandwidths.
> > > > >
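> > > > > For example (illustrative numbers only, ignoring the
> > > > > RTE_SCHED_TIME_SHIFT scaling): if sending one byte costs 3 cycles
> > > > > and the port is polled every 100 cycles, each poll credits
> > > > > 100 / 3 = 33 bytes and throws away the remaining cycle, so about 1%
> > > > > of the configured bandwidth is silently lost.
> > > > >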
> > > > > To compensate for this rounding error, the fix no longer advances
> > > > > the port's time_cpu_cycles by the number of cycles that have
> > > > > elapsed, but instead by the computed number of bytes that can be
> > > > > sent (which has been rounded down) multiplied by the number of
> > > > > cycles per byte. This means that the port's time_cpu_cycles will
> > > > > momentarily lag behind the CPU cycle count. At the next poll, the
> > > > > lag will be taken into account.
> > > > >
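> > > > > As a self-contained illustration (a toy model under assumed
> > > > > numbers, not the patch itself: it ignores rte_reciprocal_divide()
> > > > > and the RTE_SCHED_TIME_SHIFT scaling), compare advancing the port
> > > > > time by the full cycles_diff with advancing it by whole bytes only:
> > > > >
> > > > > #include <inttypes.h>
> > > > > #include <stdio.h>
> > > > >
> > > > > int main(void)
> > > > > {
> > > > > 	const uint64_t cycles_per_byte = 3;  /* assumed for the demo */
> > > > > 	const uint64_t poll_interval = 100;  /* cycles between polls */
> > > > > 	uint64_t now = 0, old_time = 0, new_time = 0;
> > > > > 	uint64_t old_bytes = 0, new_bytes = 0, bytes;
> > > > > 	int i;
> > > > >
> > > > > 	for (i = 0; i < 1000; i++) {
> > > > > 		now += poll_interval;
> > > > >
> > > > > 		/* Old behaviour: advance port time by the full elapsed
> > > > > 		 * cycles, discarding the division remainder each poll. */
> > > > > 		old_bytes += (now - old_time) / cycles_per_byte;
> > > > > 		old_time = now;
> > > > >
> > > > > 		/* Fixed behaviour: advance port time only by whole
> > > > > 		 * bytes' worth of cycles; the remainder is carried
> > > > > 		 * forward into the next poll's cycles_diff. */
> > > > > 		bytes = (now - new_time) / cycles_per_byte;
> > > > > 		new_bytes += bytes;
> > > > > 		new_time += bytes * cycles_per_byte;
> > > > > 	}
> > > > >
> > > > > 	/* Prints 33000 vs 33333: the fix recovers ~1% of bandwidth. */
> > > > > 	printf("old %" PRIu64 ", fixed %" PRIu64 "\n",
> > > > > 		old_bytes, new_bytes);
> > > > > 	return 0;
> > > > > }
> > > > >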
> > > > > Fixes: de3cfa2c98 ("sched: initial import")
> > > > >
> > > > > Signed-off-by: Alan Dewar <alan.de...@att.com>
> > > > > ---
> > > > > lib/librte_sched/rte_sched.c | 12 ++++++++++--
> > > > > 1 file changed, 10 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> > > > > index c0983ddda..c656dba2d 100644
> > > > > --- a/lib/librte_sched/rte_sched.c
> > > > > +++ b/lib/librte_sched/rte_sched.c
> > > > > @@ -222,6 +222,7 @@ struct rte_sched_port {
> > > > >  	uint64_t time_cpu_bytes;   /* Current CPU time measured in bytes */
> > > > >  	uint64_t time;             /* Current NIC TX time measured in bytes */
> > > > >  	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
> > > > > +	uint64_t cycles_per_byte;
> > > > >
> > > > >  	/* Grinders */
> > > > >  	struct rte_mbuf **pkts_out;
> > > > > @@ -852,6 +853,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
> > > > >  	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
> > > > >  		/ params->rate;
> > > > >  	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
> > > > > +	port->cycles_per_byte = cycles_per_byte;
> > > > >
> > > > >  	/* Grinders */
> > > > >  	port->pkts_out = NULL;
> > > > > @@ -2673,20 +2675,26 @@ static inline void
> > > > >  rte_sched_port_time_resync(struct rte_sched_port *port)
> > > > >  {
> > > > >  	uint64_t cycles = rte_get_tsc_cycles();
> > > > > -	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
> > > > > +	uint64_t cycles_diff;
> > > > >  	uint64_t bytes_diff;
> > > > >  	uint32_t i;
> > > > >
> > > > > +	if (cycles < port->time_cpu_cycles)
> > > > > +		goto end;
> > >
> > > The above check seems redundant, as port->time_cpu_cycles will always
> > > be less than the current cycles due to the round-off in the previous
> > > iteration: time_cpu_cycles is only ever advanced by a whole number of
> > > bytes times cycles_per_byte, which is at most the elapsed cycles.
> > >
> >
> > This was to catch the condition where the cycle count wraps back to zero
> > (after 100+ years?? depending on clock speed).
> > Rather than just going to end: the conditional should at least reset
> > port->time_cpu_cycles back to zero.
> > So there would be a very temporary glitch in accuracy once every 100+
> > years.
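> >
> > Something along these lines (an untested sketch reusing the patch's
> > variables):
> >
> >     if (cycles < port->time_cpu_cycles)
> >         port->time_cpu_cycles = 0;  /* TSC wrapped: resync from zero */
> >
> >     cycles_diff = cycles - port->time_cpu_cycles;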
>
Alan, could you please resubmit the patch with the above change? Other than
that, the patch looks good to me.