Hi all,

We've been using Flink 1.3.2 for a while now, but recently failed to deploy our fat jar to the cluster. The deployment only works when we remove 2 arbitrary operators, which gives us the impression that the job is too large. However, compared to a working version of our jar, we only changed some case classes and serializers (to support Avro). I'll provide some context below.
*Streaming operators used:* (same list as when the deploy worked)
- 9 incoming streams from Kafka (all parsed from JSON -> case classes)
- 6 stateful joins (extend CoProcessFunction)
- 4 stateful processors (extend ProcessFunction)
- 5 maps
- 2 filters
- 1 union of 3 streams
- 1 sink to Kafka (case class -> JSON)

*Changes made:*
- add an extended TypeSerializer for Avro support
- add companion objects to the case classes for translation to Avro GenericRecords (a simplified sketch is at the bottom of this mail)
- alter the stateful functions to use the above changes

*What does work:*
- remove 2 arbitrary operators and deploy the fat jar
- run the full program locally using sbt run

Could it be that the complexity somehow causes the deployment of the job as a jar to fail? We simply get a timeout from Flink's CLI when trying to deploy, even after extending the timeout to several minutes.

Any help would be very much appreciated!

Thanks,
Niels
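P.S. For context, the companion-object translation we added looks roughly like the simplified sketch below. The case class name, its fields and the schema are made up for illustration here, not our actual job code.

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}

// Example case class as carried on one of the streams (fields are illustrative).
case class Reading(id: String, timestamp: Long, value: Double)

object Reading {
  // Avro schema matching the case class.
  val schema: Schema = new Schema.Parser().parse(
    """{"type":"record","name":"Reading","fields":[
      |  {"name":"id","type":"string"},
      |  {"name":"timestamp","type":"long"},
      |  {"name":"value","type":"double"}]}""".stripMargin)

  // Case class -> Avro GenericRecord, called from the custom TypeSerializer.
  def toRecord(r: Reading): GenericRecord = {
    val rec = new GenericData.Record(schema)
    rec.put("id", r.id)
    rec.put("timestamp", r.timestamp)
    rec.put("value", r.value)
    rec
  }

  // Avro GenericRecord -> case class.
  def fromRecord(rec: GenericRecord): Reading =
    Reading(
      rec.get("id").toString,
      rec.get("timestamp").asInstanceOf[Long],
      rec.get("value").asInstanceOf[Double])
}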