[ 
https://issues.apache.org/jira/browse/FLINK-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-8307:
----------------------------------
    Component/s:     (was: Core)
                 Runtime / Web Frontend

> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler logging improvements
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-8307
>                 URL: https://issues.apache.org/jira/browse/FLINK-8307
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Web Frontend
>    Affects Versions: 1.4.0
>            Reporter: Colin Williams
>            Priority: Minor
>
> I need to set
> log4j.logger.org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler=DEBUG
> to get any insight into why jobs won't start through the web UI. However, once
> this logger is set to DEBUG, my logs are flooded with a perpetually repeated
> message about a job that can't be found.
>
> From my perspective, the reason a job won't start from the web UI should be
> logged at INFO level. The "could not find job" message should also be logged
> at INFO, and it should be reported only once so that it does not overflow the
> logs.
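> For reference, this is the only configuration change needed today to surface
> that information. A minimal sketch, assuming Flink's default log4j setup
> (conf/log4j.properties in the distribution; the location may differ in other
> deployments):
> ```
> # Raise only the web monitor handler's logger to DEBUG so that job
> # submissions failing through the web UI become visible, without making
> # the rest of the runtime verbose.
> log4j.logger.org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler=DEBUG
> ```
> With that logger at DEBUG, the following excerpt then repeats perpetually for
> a job that can no longer be found: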
>  ```
> 2017-12-22 00:02:03,139 DEBUG org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler     - Error while handling request.
> java.util.concurrent.CompletionException: org.apache.flink.runtime.rest.NotFoundException: Could not find job cde261e3623e1b3e3d8ce09bfc838b6e.
>         at org.apache.flink.runtime.rest.handler.legacy.AbstractExecutionGraphRequestHandler.lambda$handleJsonRequest$0(AbstractExecutionGraphRequestHandler.java:70)
>         at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>         at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>         at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>         at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>         at org.apache.flink.runtime.rest.handler.legacy.ExecutionGraphCache.lambda$getExecutionGraph$0(ExecutionGraphCache.java:130)
>         at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>         at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>         at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>         at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>         at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:444)
>         at akka.dispatch.OnComplete.internal(Future.scala:259)
>         at akka.dispatch.OnComplete.internal(Future.scala:256)
>         at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
>         at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>         at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:84)
>         at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>         at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>         at scala.concurrent.Promise$class.complete(Promise.scala:55)
>         at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
>         at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>         at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
>         at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
>         at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
>         at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>         at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
>         at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>         at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
>         at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>         at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>         at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:534)
>         at org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:123)
>         at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>         at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>         at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>         at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>         at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.rest.NotFoundException: Could not find job cde261e3623e1b3e3d8ce09bfc838b6e.
>         ... 53 more
> ```
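> To illustrate the "report only once" part of the request, here is a purely
> hypothetical sketch (it is not based on the actual RuntimeMonitorHandler
> implementation; the class and field names are made up for illustration) of
> how the missing-job message could be logged at INFO level at most once per
> job ID:
> ```
> // Hypothetical sketch only; not Flink's actual code. Shows the
> // "log at INFO, but only once per missing job ID" behaviour proposed above.
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
>
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class OnceOnlyJobNotFoundLogger {
>
>     private static final Logger LOG =
>             LoggerFactory.getLogger(OnceOnlyJobNotFoundLogger.class);
>
>     // Job IDs whose absence has already been reported (thread-safe set).
>     private final Set<String> alreadyReported = ConcurrentHashMap.newKeySet();
>
>     public void reportJobNotFound(String jobId) {
>         // add() returns true only the first time a given ID is inserted,
>         // so the INFO line is emitted at most once per missing job.
>         if (alreadyReported.add(jobId)) {
>             LOG.info("Could not find job {}.", jobId);
>         }
>     }
> }
> ```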



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
