Thanks for the reply.

However, I think my case differs because I am running a sequence of
independent Flink jobs on the same environment instance.
I only create the LocalExecutionEnvironment once.
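
For context, my program is structured roughly like this (a simplified
sketch; the class name and the two placeholder pipelines stand in for the
real jobs, which do more work):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class MultiJobExample {

    public static void main(String[] args) throws Exception {
        // The environment (and its embedded web UI) is created exactly once.
        ExecutionEnvironment env =
                ExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());

        // Job 1: print() triggers execution, so this submits one independent job.
        DataSet<Integer> first = env.fromElements(1, 2, 3);
        first.print();

        // Job 2: submitted afterwards, on the same environment instance;
        // the web UI shows a new job ID for it.
        DataSet<String> second = env.fromElements("a", "b", "c");
        second.print();
    }
}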

The web UI shows the job ID changing correctly every time a new job is
executed.

Since it is the same execution environment (and therefore, I imagine, the
same cluster instance), those completed jobs should show up as well, no?
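
For what it's worth, I would also expect a query like the one below to list
the finished jobs (assuming this Flink version exposes the /jobs/overview
endpoint of the monitoring REST API):

curl http://127.0.0.1:8081/jobs/overview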

On Wed, 5 Sep 2018 at 18:40, Chesnay Schepler <ches...@apache.org> wrote:

> When you create an environment that way, the cluster is shut down once
> the job completes.
> The WebUI can _appear_ to still be working since all the files and data
> about the job are cached in the browser.
>
> On 05.09.2018 17:39, Miguel Coimbra wrote:
>
> Hello,
>
> I'm having difficulty reading the status (such as time taken for each
> dataflow operator in a job) of jobs that have completed.
>
> First, when I click on "Completed Jobs" in the web interface (by default
> at port 8081), no job shows up.
> I can see jobs listed as "Running", but as soon as they finish, they never
> appear in the "Completed Jobs" section, where I would expect them.
>
> Note that I am running locally, and the web UI is up (I checked that it is
> reachable in the browser at port 8081).
> None of these links worked for checking jobs that have already finished,
> such as the job with ID 618fac9da6ea458f5091a9c40e54cbcc that had been running:
>
> http://127.0.0.1:8081/jobs/618fac9da6ea458f5091a9c40e54cbcc
> http://127.0.0.1:8081/completed-jobs/618fac9da6ea458f5091a9c40e54cbcc
>
> I'm running with a LocalExecutionEnvironment created with the method:
>
> ExecutionEnvironment.createLocalEnvironmentWithWebUI(conf)
>
> I hope someone may be able to help.
>
> Best,
>
