AHeise commented on code in PR #130:
URL: https://github.com/apache/flink-connector-kafka/pull/130#discussion_r1820453815
##########
flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaRecordSerializationSchemaBuilder.java:
##########
@@ -369,5 +416,43 @@ public ProducerRecord<byte[], byte[]> serialize(
                     value,
                     headerProvider != null ? headerProvider.getHeaders(element) : null);
         }
+
+        @Override
+        public Optional<KafkaDatasetFacet> getKafkaDatasetFacet() {
+            if (!(topicSelector instanceof KafkaDatasetIdentifierProvider)) {
+                LOG.warn("Cannot identify topics. Not a KafkaDatasetIdentifierProvider.");
+                return Optional.empty();
+            }
+
+            Optional<KafkaDatasetIdentifier> topicsIdentifier =
+                    ((KafkaDatasetIdentifierProvider) topicSelector).getDatasetIdentifier();
+
+            if (!topicsIdentifier.isPresent()) {
+                LOG.warn("No topic identifiers provided.");
+                return Optional.empty();
+            }
+
+            TypeInformation<?> typeInformation;
+            if (this.valueSerializationSchema instanceof ResultTypeQueryable) {
+                typeInformation =
+                        ((ResultTypeQueryable<?>) this.valueSerializationSchema).getProducedType();
+            } else {
+                // get the type information from the serialize method signature
+                typeInformation =

Review Comment:
   Yes, TypeInformationFacet sounds like a general concept, and I'm convinced you want to pull it out of the KafkaDatasetFacet now. You probably want to name it "inputType" or "outputType", depending on the kind of connector (source/sink). I'd design it generically and pull it up into flink-core for Flink 2.0 later, so make it work in Kafka first and then propose porting it upwards.
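To make the suggestion concrete, here is a minimal sketch of what a standalone type facet could look like. The names `TypeDatasetFacet` and `TypeDatasetFacetProvider` are illustrative assumptions, not the PR's actual API; only `TypeInformation` is an existing Flink class:

```java
import java.util.Optional;

import org.apache.flink.api.common.typeinfo.TypeInformation;

/**
 * Hypothetical sketch: a connector-agnostic facet that carries only the record
 * type of a dataset. Kept separate from the Kafka-specific KafkaDatasetFacet so
 * it could later move up into flink-core and be reported as "inputType" for
 * sources or "outputType" for sinks.
 */
interface TypeDatasetFacet {
    TypeInformation<?> getTypeInformation();
}

/**
 * Hypothetical provider interface: serialization schemas or connectors that can
 * determine their record type expose the facet through it; everything else
 * returns Optional.empty(), mirroring getKafkaDatasetFacet() above.
 */
interface TypeDatasetFacetProvider {
    Optional<TypeDatasetFacet> getTypeDatasetFacet();
}
```

Because such a sketch depends only on TypeInformation and on no Kafka classes, promoting it into flink-core later would not drag connector dependencies along.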