[ 
https://issues.apache.org/jira/browse/BEAM-14255?focusedWorklogId=772035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-772035
 ]

ASF GitHub Bot logged work on BEAM-14255:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/May/22 17:05
            Start Date: 18/May/22 17:05
    Worklog Time Spent: 10m 
      Work Description: TheNeuralBit commented on code in PR #17671:
URL: https://github.com/apache/beam/pull/17671#discussion_r876139262


##########
sdks/python/apache_beam/ml/inference/base_test.py:
##########
@@ -133,14 +133,14 @@ def test_timing_metrics(self):
             MetricsFilter().with_name('inference_batch_latency_micro_secs')))

Review Comment:
   Well, my confusion is that you had to change the assertion from 3000 to
3000000, indicating that the units on this metric changed.
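
For context, a minimal sketch (not the actual Beam test or RunInference code) of how a microsecond latency metric falls out of time.time(): time.time() returns seconds as a float, so the elapsed value has to be scaled by 1_000_000 before it is reported, and a mocked 3-second step would then surface as 3000000 rather than 3000 if the earlier clock abstraction reported milliseconds.

    import time

    def measure_batch_latency_micro_secs(model, batch):
      # Hypothetical helper, not Beam's implementation: time one inference
      # batch with time.time() and report the latency in microseconds.
      start = time.time()
      predictions = [model(example) for example in batch]  # stand-in for real inference
      latency_micro_secs = int((time.time() - start) * 1_000_000)
      return predictions, latency_micro_secs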





Issue Time Tracking
-------------------

    Worklog Id:     (was: 772035)
    Time Spent: 2h 10m  (was: 2h)

> Drop the clock abstraction and just use time.time for time measurements
> ------------------------------------------------------------------------
>
>                 Key: BEAM-14255
>                 URL: https://issues.apache.org/jira/browse/BEAM-14255
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Ryan Thompson
>            Assignee: Ryan Thompson
>            Priority: P2
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Right now the TFX-BSL RunInference library uses an abstract clock class to 
> get microsecond precision, but time.time should give adequate precision.
>  
> Investigate removing the clock abstraction and just using time.time.
>  
> Alternatively, comment why the abstraction is useful.
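
A hedged before/after sketch of the change this ticket suggests; the class and function names below are illustrative placeholders, not the actual TFX-BSL or Beam API.

    import abc
    import time

    # Before (roughly): an injectable clock abstraction, used so that tests
    # can control time and the metric can be reported in microseconds.
    class Clock(abc.ABC):
      @abc.abstractmethod
      def get_current_time_in_microseconds(self) -> int:
        ...

    class RealClock(Clock):
      def get_current_time_in_microseconds(self) -> int:
        return int(time.time() * 1_000_000)

    # After: call time.time() directly and convert. time.time() returns a
    # float number of seconds with sub-millisecond resolution on common
    # platforms, which should be adequate for batch-latency metrics; tests
    # can still patch time.time (e.g. with unittest.mock) instead of
    # injecting a clock.
    def elapsed_micro_secs(start_secs: float) -> int:
      return int((time.time() - start_secs) * 1_000_000)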



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
