> I think the answer is to use a DataflowClient in the second service, but
> creating one requires DataflowPipelineOptions. Are these options supposed
> to be exactly the same as those used by the first service? Or do only some
> of the fields have to be the same?

Most of the options are not needed just to retrieve a job. A Dataflow job
is always uniquely identified by its project, region, and job ID, so only
those three fields have to match the first service:
https://github.com/apache/beam/blob/ecedd3e654352f1b51ab2caae0fd4665403bd0eb/runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowClient.java#L100
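
Something like the following should work in the monitoring service. This is
a minimal, untested sketch: the project, region, job ID, and class name are
placeholders, and credentials are assumed to come from application default
credentials, as usual for GCP clients.

import com.google.api.services.dataflow.model.Job;
import java.io.IOException;
import org.apache.beam.runners.dataflow.DataflowClient;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class JobStatusCheck {
  public static void main(String[] args) throws IOException {
    // Only project and region need to match the launching service.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.as(DataflowPipelineOptions.class);
    options.setProject("my-gcp-project"); // placeholder
    options.setRegion("us-central1");     // placeholder

    DataflowClient client = DataflowClient.create(options);

    // The job ID is handed over by the launching service, e.g. from
    // DataflowPipelineJob.getJobId() on its PipelineResult.
    Job job = client.getJob("2020-10-12_00_00_00-1234567890123456789");
    System.out.println(job.getCurrentState()); // e.g. "JOB_STATE_RUNNING"
  }
}

Note that DataflowPipelineJob (the PipelineResult returned by DataflowRunner)
exposes getJobId(), so the first service can persist that ID wherever the
second service can read it and poll from there.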

On Mon, Oct 12, 2020 at 9:31 AM Peter Littig <plit...@nianticlabs.com>
wrote:

> Hello, Beam users!
>
> Suppose I want to build two (Java) services, one that launches
> (long-running) dataflow jobs, and the other that monitors the status of
> dataflow jobs. Within a single service, I could simply track a
> PipelineResult for each dataflow run and periodically call getState. How
> can I monitor job status like this from a second, independent service?
>
> I think the answer is to use a DataflowClient in the second service, but
> creating one requires DataflowPipelineOptions. Are these options supposed
> to be exactly the same as those used by the first service? Or do only some
> of the fields have to be the same?
>
> Or maybe there's a better alternative than DataflowClient?
>
> Thanks in advance!
>
> Peter
>
