On 4/9/2025 5:55 AM, Olech, Milena wrote:
> On 4/8/2025 11:15 PM, Jacob Keller wrote:
> 
>> On 4/8/2025 3:30 AM, Milena Olech wrote:
>>> +static u64 idpf_ptp_read_src_clk_reg_direct(struct idpf_adapter *adapter,
>>> +                                       struct ptp_system_timestamp *sts)
>>> +{
>>> +   struct idpf_ptp *ptp = adapter->ptp;
>>> +   u32 hi, lo;
>>> +
>>> +   spin_lock(&ptp->read_dev_clk_lock);
>>> +
>>> +   /* Read the system timestamp pre PHC read */
>>> +   ptp_read_system_prets(sts);
>>> +
>>> +   idpf_ptp_enable_shtime(adapter);
>>> +

Aha, I see it now. You snapshot the time value here.

>>> +   /* Read the system timestamp post PHC read */
>>> +   ptp_read_system_postts(sts);
>>> +
>>> +   lo = readl(ptp->dev_clk_regs.dev_clk_ns_l);
>>> +   hi = readl(ptp->dev_clk_regs.dev_clk_ns_h);
>>> +

And this is just reading it out of the snapshot shadow registers. Ok.

>>> +   spin_unlock(&ptp->read_dev_clk_lock);
>>> +
>>> +   return ((u64)hi << 32) | lo;
>>> +}
>> v9 had comments regarding the latching of the registers for direct
>> access. Can you confirm whether this is known to be safe, or whether you
>> need to implement a 3-part read like we do in ice and other hardware?
>> Even with a spinlock, I think there could still be issues with rollover
>> in the hardware.
>>
> 
> So in this model we have shadow registers, and we trigger the HW - by
> the writes executed in idpf_ptp_enable_shtime - to latch the value. I
> ran some experiments where I removed this function call, and the values
> in the hi/lo registers stayed the same.
> 
> In other words, it is safe to read the values from the hi/lo registers
> until the next latch.
> 
> To the best of my knowledge, ice does not have any such HW support,
> which is why all these extra steps are required.
> 

Yep, ice doesn't have a snapshot like this, and neither does our old
hardware. This is much better: it improves the accuracy of the sts
values, and is simpler. Nice.
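
For anyone reading the archive later, here is roughly what the retry-style
read looks like on hardware that has no such latch (the 3-part read I was
referring to). This is only a sketch with made-up register pointers, not
the actual ice code:

/*
 * Rough sketch only: a rollover-safe read on hardware without a latched
 * snapshot. The register pointers are hypothetical, not the real
 * idpf/ice layout.
 */
static u64 example_read_clk_without_latch(void __iomem *clk_ns_h,
					  void __iomem *clk_ns_l)
{
	u32 hi, lo, hi2;

	do {
		hi = readl(clk_ns_h);
		lo = readl(clk_ns_l);
		/* If the high word changes here, the low word rolled
		 * over between the reads; retry for a coherent pair.
		 */
		hi2 = readl(clk_ns_h);
	} while (hi != hi2);

	return ((u64)hi << 32) | lo;
}

With the latched snapshot none of that is needed, and the sts bracketing
only covers the latch write, which is where the accuracy win comes from.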

Now that I understand what idpf_ptp_enable_shtime() does, it's a lot
clearer.
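
If anyone wants to reproduce the experiment Milena mentioned, a sanity
check could look something like this (purely illustrative, just reusing
the fields from the patch; locking omitted for brevity):

/*
 * Purely illustrative: without a new latch, back-to-back reads of the
 * shadow register are expected to return the same value, matching the
 * experiment described above.
 */
static void example_check_shadow_stability(struct idpf_adapter *adapter)
{
	struct idpf_ptp *ptp = adapter->ptp;
	u32 lo1, lo2;

	idpf_ptp_enable_shtime(adapter);	/* latch once */
	lo1 = readl(ptp->dev_clk_regs.dev_clk_ns_l);
	lo2 = readl(ptp->dev_clk_regs.dev_clk_ns_l);	/* no new latch */

	WARN_ON(lo1 != lo2);	/* shadow value should be stable */
}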

Thanks!

> Milena
> 
>> Thanks,
>> Jake
>>
