[ 
https://issues.apache.org/jira/browse/BEAM-14255?focusedWorklogId=771974&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771974
 ]

ASF GitHub Bot logged work on BEAM-14255:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/May/22 15:33
            Start Date: 18/May/22 15:33
    Worklog Time Spent: 10m 
      Work Description: TheNeuralBit commented on code in PR #17671:
URL: https://github.com/apache/beam/pull/17671#discussion_r876045980


##########
sdks/python/apache_beam/ml/inference/base_test.py:
##########
@@ -133,14 +133,14 @@ def test_timing_metrics(self):
             MetricsFilter().with_name('inference_batch_latency_micro_secs')))

Review Comment:
   I think this metric should be renamed if we're changing the precision



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -150,27 +155,24 @@ def update(
 
 class _RunInferenceDoFn(beam.DoFn):
   """A DoFn implementation generic to frameworks."""
-  def __init__(self, model_loader: ModelLoader, clock=None):
+  def __init__(self, model_loader: ModelLoader, clock):
     self._model_loader = model_loader
     self._inference_runner = model_loader.get_inference_runner()
     self._shared_model_handle = shared.Shared()
     self._metrics_collector = _MetricsCollector(
         self._inference_runner.get_metrics_namespace())
     self._clock = clock
-    if not clock:
-      self._clock = _ClockFactory.make_clock()
     self._model = None
 
   def _load_model(self):
     def load():
       """Function for constructing shared LoadedModel."""
       memory_before = _get_current_process_memory_in_bytes()
-      start_time = self._clock.get_current_time_in_microseconds()
+      start_time = _to_milliseconds(self._clock.time())
       model = self._model_loader.load_model()
-      end_time = self._clock.get_current_time_in_microseconds()
+      end_time = _to_milliseconds(self._clock.time())

Review Comment:
   A couple of questions here:
   - Why change the units to milliseconds?
   - Maybe we should use
[time_ns](https://docs.python.org/3/library/time.html#time.time_ns) instead?
The time docs say time_ns should be preferred to avoid precision loss. It's
Python 3.7+ only, but we've dropped support for 3.6 now.
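
As a rough sketch of the time_ns suggestion (the helper name and conversion below are illustrative only, not the PR's actual code):

```python
import time

def _to_micros(delta_ns: int) -> int:
  # Integer nanoseconds -> integer microseconds; no float precision loss.
  return delta_ns // 1_000

start_ns = time.time_ns()
sum(range(1_000_000))  # stand-in for the work being timed (e.g. model loading)
end_ns = time.time_ns()
print('latency_micros:', _to_micros(end_ns - start_ns))
```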





Issue Time Tracking
-------------------

    Worklog Id:     (was: 771974)
    Time Spent: 1h 50m  (was: 1h 40m)

> Drop the clock abstraction and just use time.time for time measurements
> ------------------------------------------------------------------------
>
>                 Key: BEAM-14255
>                 URL: https://issues.apache.org/jira/browse/BEAM-14255
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Ryan Thompson
>            Assignee: Ryan Thompson
>            Priority: P2
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Right now the TFX-BSL RunInference library uses an abstract clock class to 
> get microsecond precision, but time.time should give adequate precision.
>  
> Investigate removing the clock abstraction and just using time.time.
>  
> Alternatively, comment why the abstraction is useful.
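
A minimal sketch, assuming the clock abstraction is dropped, of timing with time.time directly (the class and names below are illustrative only, not Beam's code):

```python
import time

class _SimpleTimer:
  """Illustrative stand-in for an injected clock: call time.time() directly."""
  def __enter__(self):
    self._start = time.time()
    return self

  def __exit__(self, *exc):
    # time.time() returns float seconds; convert the delta to microseconds.
    self.elapsed_micros = int((time.time() - self._start) * 1_000_000)
    return False

with _SimpleTimer() as t:
  sum(range(1_000_000))  # stand-in for model loading / inference
print(t.elapsed_micros)
```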



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
