Please always try to include user@f.a.o in your reply, so others can participate in the discussion and learn from your findings.
I think Dominik has already given you a pretty good hint. The JSON parsing in
this case is no different from any other Java application (with Jackson /
Gson / ...). You can then simply split the parsed elements into good and bad
records.

D.

On Wed, Dec 29, 2021 at 10:53 AM Siddhesh Kalgaonkar <
kalgaonkarsiddh...@gmail.com> wrote:

> Hi David,
>
> Thanks for the clarification. I will check the link you shared. Also, as
> mentioned by Dominik, can you help me with process functions? How can I
> use them for my use case?
>
> Thanks,
> Siddhesh
>
> On Wed, Dec 29, 2021 at 2:50 PM David Morávek <d...@apache.org> wrote:
>
>> Hi Siddhesh,
>>
>> it seems that the question is already being answered in the SO thread,
>> so let's keep the discussion focused there.
>>
>> Looking at the original question, I think it's important to understand
>> that TypeInformation is not meant to be used for "runtime" matching,
>> but to address the type erasure [1] limitation for UDFs (user-defined
>> functions), so Flink can pick the correct serializer / deserializer.
>>
>> [1] https://docs.oracle.com/javase/tutorial/java/generics/erasure.html
>>
>> Best,
>> D.
>>
>> On Tue, Dec 28, 2021 at 9:21 PM Siddhesh Kalgaonkar <
>> kalgaonkarsiddh...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I am a newbie to Flink and Scala and trying my best to learn
>>> everything I can. I am doing an exercise where I receive incoming JSON
>>> data from a Kafka topic and want to perform a data type check on it.
>>> For that, I came across Flink's TypeInformation. Please read my
>>> problem in detail at the link below:
>>>
>>> Flink Problem
>>> <https://stackoverflow.com/questions/70500023/typeinformation-in-flink-to-compare-the-datatypes-dynamically>
>>>
>>> I went through the documentation but didn't come across any relevant
>>> examples. Any suggestions would help.
>>>
>>> Looking forward to hearing from you.
>>>
>>> Thanks,
>>> Siddhesh
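To make the "split the parsed elements into good and bad records" suggestion concrete, here is a minimal sketch of the routing pattern. In a real Flink job you would attempt `ObjectMapper.readTree(raw)` from Jackson and treat a `JsonProcessingException` as a bad record (typically emitting it to a side output); to keep this example dependency-free, a crude stand-in validity check is used instead. The class, method, and field names (`RecordSplitter`, `looksLikeValidRecord`, `requiredKey`) are illustrative, not from any library.

```java
import java.util.ArrayList;
import java.util.List;

public class RecordSplitter {

    /** Holds the two output streams of the split. */
    public static final class Split {
        public final List<String> good = new ArrayList<>();
        public final List<String> bad = new ArrayList<>();
    }

    // Stand-in for a real JSON parse attempt (e.g. Jackson's readTree).
    // Here: non-null, wrapped in braces, and containing the required key.
    static boolean looksLikeValidRecord(String raw, String requiredKey) {
        if (raw == null) {
            return false;
        }
        String s = raw.trim();
        return s.startsWith("{") && s.endsWith("}")
                && s.contains("\"" + requiredKey + "\"");
    }

    /** Routes each incoming string to the good or bad list. */
    public static Split split(List<String> incoming, String requiredKey) {
        Split result = new Split();
        for (String raw : incoming) {
            if (looksLikeValidRecord(raw, requiredKey)) {
                result.good.add(raw);  // continue normal processing
            } else {
                result.bad.add(raw);   // e.g. a dead-letter topic / side output
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> batch = List.of(
                "{\"id\": 1, \"name\": \"a\"}",
                "not json at all",
                "{\"name\": \"missing id\"}");
        Split s = split(batch, "id");
        System.out.println("good=" + s.good.size() + " bad=" + s.bad.size());
        // prints "good=1 bad=2"
    }
}
```

In Flink itself the same routing is usually done inside a `ProcessFunction`, sending bad records to an `OutputTag` side output so the main stream stays clean.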
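David's point about TypeInformation existing to work around type erasure can be seen in a few lines of plain Java: at runtime, a `List<String>` and a `List<Integer>` are the same class, so Flink cannot discover the element type of a UDF's generics by reflection alone and asks for an explicit TypeInformation hint instead.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // Both are plain ArrayList at runtime: the element type parameter
        // has been erased by the compiler. This is why Flink needs
        // TypeInformation to pick the correct serializer / deserializer.
        System.out.println(strings.getClass() == ints.getClass());
        // prints "true"
    }
}
```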