[ 
https://issues.apache.org/jira/browse/BEAM-14255?focusedWorklogId=772030&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-772030
 ]

ASF GitHub Bot logged work on BEAM-14255:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/May/22 16:59
            Start Date: 18/May/22 16:59
    Worklog Time Spent: 10m 
      Work Description: ryanthompson591 commented on code in PR #17671:
URL: https://github.com/apache/beam/pull/17671#discussion_r876130046


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -150,27 +155,24 @@ def update(
 
 class _RunInferenceDoFn(beam.DoFn):
   """A DoFn implementation generic to frameworks."""
-  def __init__(self, model_loader: ModelLoader, clock=None):
+  def __init__(self, model_loader: ModelLoader, clock):
     self._model_loader = model_loader
     self._inference_runner = model_loader.get_inference_runner()
     self._shared_model_handle = shared.Shared()
     self._metrics_collector = _MetricsCollector(
         self._inference_runner.get_metrics_namespace())
     self._clock = clock
-    if not clock:
-      self._clock = _ClockFactory.make_clock()
     self._model = None
 
   def _load_model(self):
     def load():
       """Function for constructing shared LoadedModel."""
       memory_before = _get_current_process_memory_in_bytes()
-      start_time = self._clock.get_current_time_in_microseconds()
+      start_time = _to_milliseconds(self._clock.time())
       model = self._model_loader.load_model()
-      end_time = self._clock.get_current_time_in_microseconds()
+      end_time = _to_milliseconds(self._clock.time())

Review Comment:
   If you look at line 172, which I removed, the previous behavior was to get 
the start and end time in microseconds and then convert to milliseconds further 
down.
   
   For some reason the TensorFlow team felt (and it seems right) that 
milliseconds were adequate for model loading, whereas microseconds were used 
for measuring predictions.
   
   I think it's possible that model loading can take hundreds or thousands of 
milliseconds, whereas a single prediction might not even take a millisecond.
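
   As a rough sketch of the conversion the diff relies on: time.time() returns 
float seconds, so helpers like _to_milliseconds (named in the diff above) and a 
companion _to_microseconds could plausibly look like this. The bodies below are 
assumptions for illustration, not the PR's actual code.

    import time

    def _to_milliseconds(time_in_seconds):
      # Convert a time.time() float (seconds) to integer milliseconds.
      return int(time_in_seconds * 1_000)

    def _to_microseconds(time_in_seconds):
      # Convert a time.time() float (seconds) to integer microseconds.
      return int(time_in_seconds * 1_000_000)

    # Model loading can take hundreds or thousands of milliseconds, so
    # millisecond resolution is enough there; a single prediction may finish
    # in well under a millisecond, so it is measured in microseconds.
    start = time.time()
    # ... load the model here ...
    load_latency_ms = _to_milliseconds(time.time()) - _to_milliseconds(start)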



##########
sdks/python/apache_beam/ml/inference/base_test.py:
##########
@@ -133,14 +133,14 @@ def test_timing_metrics(self):
             MetricsFilter().with_name('inference_batch_latency_micro_secs')))

Review Comment:
   No, this stays the same. I think you are mistaking the inference latency 
metric, which is still measured in microseconds, for the model load time, which 
this change reports in milliseconds.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 772030)
    Time Spent: 2h  (was: 1h 50m)

> Drop the clock abstraction and just use time.time for time measurements
> ------------------------------------------------------------------------
>
>                 Key: BEAM-14255
>                 URL: https://issues.apache.org/jira/browse/BEAM-14255
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Ryan Thompson
>            Assignee: Ryan Thompson
>            Priority: P2
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> Right now the TFX-BSL RunInference library uses an abstract clock class to 
> get microsecond precision, but time.time should give adequate precision.
>  
> Investigate removing the clock abstraction and just using time.time.
>  
> Alternatively, add a comment explaining why the abstraction is useful.
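
As a minimal sketch of the proposal (not the actual Beam implementation), a 
DoFn could call time.time() directly instead of going through an injected 
clock object; every name below other than time.time is illustrative.

    import time

    class _SketchRunInferenceDoFn:
      """Illustrative only; shows timing with time.time() and no clock class."""

      def _load_model_timed(self, load_fn):
        # time.time() returns float seconds with sub-millisecond resolution
        # on most platforms, which is adequate for timing model loading.
        start = time.time()
        model = load_fn()
        load_time_ms = int((time.time() - start) * 1_000)
        return model, load_time_ms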


