Hi, I cannot spot anything bad or "wrong" about your job configuration. Maybe you can save and send the logs if it happens again? Did you observe this only once, does it happen often, or is it even reproducible?
Best,
Stefan

> On 24.09.2018 at 10:15, PedroMrChaves <pedro.mr.cha...@gmail.com> wrote:
>
> Hello Stefan,
>
> Thank you for the help.
>
> I've actually lost those logs due to several cluster restarts that we did,
> which caused log rotation (limit = 5 versions).
> The log lines that I posted were the only ones that showed signs of a
> problem.
>
> The configuration of the job is as follows:
>
> private static final int DEFAULT_MAX_PARALLELISM = 16;
> private static final int CHECKPOINTING_INTERVAL = 1000;
> private static final int MIN_PAUSE_BETWEEN_CHECKPOINTS = 1000;
> private static final int CHECKPOINT_TIMEOUT = 60000;
> private static final int INTERVAL_BETWEEN_RESTARTS = 120;
> (...)
>
> environment.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> environment.setMaxParallelism(DEFAULT_MAX_PARALLELISM);
> environment.enableCheckpointing(CHECKPOINTING_INTERVAL,
>         CheckpointingMode.EXACTLY_ONCE);
> environment.getCheckpointConfig().setMinPauseBetweenCheckpoints(MIN_PAUSE_BETWEEN_CHECKPOINTS);
> environment.getCheckpointConfig().setCheckpointTimeout(CHECKPOINT_TIMEOUT);
> environment.setRestartStrategy(RestartStrategies.noRestart());
> environment.setParallelism(parameters.getInt(JOB_PARALLELISM));
>
> The Kafka consumer/producer configuration is:
>
> properties.put("value.deserializer",
>         "org.apache.kafka.common.serialization.StringDeserializer");
> properties.put("max.request.size", "1579193");
> properties.put("processing.guarantee", "exactly_once");
> properties.put("isolation.level", "read_committed");
>
> -----
> Best Regards,
> Pedro Chaves
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
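For reference, a minimal sketch of how properties like the ones quoted above are typically wired into the Flink Kafka 0.11 connector. The connector version, topic name, broker address, and group id below are my assumptions, since they are not shown in the snippet:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class KafkaWiringSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment environment =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        // Placeholder connection settings, not from the original snippet:
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("group.id", "my-consumer-group");
        // Settings as quoted in the message above:
        properties.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("max.request.size", "1579193");
        properties.put("isolation.level", "read_committed");

        // "input-topic" is a placeholder; the real topic name is not shown.
        FlinkKafkaConsumer011<String> consumer = new FlinkKafkaConsumer011<>(
                "input-topic", new SimpleStringSchema(), properties);

        DataStream<String> stream = environment.addSource(consumer);
        stream.print();

        environment.execute("kafka-wiring-sketch");
    }
}

On the producer side, exactly-once delivery in Flink goes through FlinkKafkaProducer011 with Semantic.EXACTLY_ONCE rather than through a plain Kafka property, but that part of the job is not shown, so the sketch sticks to the consumer.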