Hello! I'm reposting this since the other thread apparently had some formatting issues; I hope it works this time. I'm having performance problems with a Flink job. If anything valuable is missing, please ask and I will try to answer ASAP.

My job looks like this: first, I read data from Kafka. This is very fast, at 100k msgs/s. The data is decoded and a type is added (we have multiple message types per Kafka topic). Then we select the TYPE_A messages and create a Scala entity (a case class) out of each one. Afterwards, in MapEntityToMultipleEntities, each Scala entity is split into multiple entities. Finally, a watermark is added.

As you can see, the data is not keyed in any way yet. *Is there a way to make this faster?*

Measurements were taken with … and …

I'm running this on an Intel i5-3470 with 16 GB RAM, on Ubuntu 16.04.1 LTS and Flink 1.1.4.
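For reference, the transformation chain described above can be sketched in plain Scala collections (the real job uses Flink's DataStream API; all names here — RawMsg, TypedMsg, EntityA, decode, split — are hypothetical stand-ins, since the actual operators weren't shown):

```scala
// Hypothetical message and entity types; the real job's types are not shown.
case class RawMsg(payload: String)
case class TypedMsg(msgType: String, payload: String)
case class EntityA(id: Int, value: String)

// Decode step: attach a type tag, since multiple message types
// share one Kafka topic. Assumes a "TYPE:rest" payload layout.
def decode(m: RawMsg): TypedMsg = {
  val Array(t, rest) = m.payload.split(":", 2)
  TypedMsg(t, rest)
}

// MapEntityToMultipleEntities analogue: split one entity into several.
def split(e: EntityA): Seq[EntityA] =
  e.value.split(",").toSeq.map(v => e.copy(value = v))

val raw = Seq(RawMsg("TYPE_A:1,a,b"), RawMsg("TYPE_B:2,c"))
val out = raw
  .map(decode)                                        // decode + add type
  .filter(_.msgType == "TYPE_A")                      // select TYPE_A only
  .map(t => EntityA(t.payload.split(",")(0).toInt,    // build the case class
                    t.payload.split(",", 2)(1)))
  .flatMap(split)                                     // one entity -> many
println(out)
```

In the real job each of these stages would be a DataStream operator (`map`, `filter`, `flatMap`), followed by the watermark assigner; since nothing is keyed yet, every stage runs as a simple element-wise transformation.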
--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Improving-Flink-Performance-tp11248.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.