Hey,

I brought this issue up a couple of months back, but I would like to raise it again.

I think the current way of validating the input type of UDFs against the
output type of the preceding operators is too aggressive and breaks a lot
of code that would otherwise work.

This issue appears all the time when I want to use my own
TypeInformation<> implementations for operators, for example when creating
my own Tuple type infos with custom types for the different fields, and so on.

I have a more complex streaming job which would not run with the input
type validation in place. After replacing the exceptions with logging, my
job runs perfectly (making my point), but you can see the errors that
would have been reported as exceptions in the logs:

2016-03-02 11:06:03,447 ERROR
org.apache.flink.api.java.typeutils.TypeExtractor - Input mismatch: Generic
object type 'mypackage.TestEvent' expected but was 'mypackage.Event'.
2016-03-02 11:06:03,450 ERROR
org.apache.flink.api.java.typeutils.TypeExtractor - Input mismatch: Unknown
Error. Type is null.
2016-03-02 11:06:03,466 ERROR
org.apache.flink.api.java.typeutils.TypeExtractor - Input mismatch: Basic
type expected.
2016-03-02 11:06:03,470 ERROR
org.apache.flink.api.java.typeutils.TypeExtractor - Input mismatch: Basic
type expected.

Clearly, all these errors were not valid in my case, as my job runs
perfectly.
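To make the first log line concrete, here is a minimal, self-contained sketch (not actual Flink code; all class and method names below are hypothetical) of the kind of mismatch involved: an exact-equality type check rejects a related type that would work at runtime, while an assignability-based check would let it through.

```java
// Hypothetical illustration of strict vs. lenient input type validation.
// These classes stand in for the user types from the log above.
class Event {}
class TestEvent extends Event {}

class TypeCheckSketch {
    // Strict check: only an exact class match passes.
    // Analogous to the validation that raises the exceptions.
    static boolean strictMatch(Class<?> declared, Class<?> actual) {
        return declared.equals(actual);
    }

    // Looser check: any assignable type passes, so code that is
    // type-safe at runtime is not rejected up front.
    static boolean assignableMatch(Class<?> declared, Class<?> actual) {
        return declared.isAssignableFrom(actual);
    }

    public static void main(String[] args) {
        // The UDF declares Event, but the custom TypeInformation
        // describes the subtype TestEvent.
        System.out.println(strictMatch(Event.class, TestEvent.class));     // false -> exception
        System.out.println(assignableMatch(Event.class, TestEvent.class)); // true  -> job runs
    }
}
```

A validation along the lines of the second check would still catch genuinely incompatible types while accepting jobs like mine.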

Would it make sense to change the current behaviour, or am I just abusing
the .returns(..) and ResultTypeQueryable interfaces in unintended ways?

Cheers,
Gyula
