fsk119 commented on code in PR #26603:
URL: https://github.com/apache/flink/pull/26603#discussion_r2117608943
##########
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/catalog/CatalogSchemaModel.java:
##########
@@ -85,4 +109,34 @@ private static RelDataType schemaToRelDataType(
```java
                        .collect(Collectors.toList());
        return typeFactory.buildRelNodeRowType(fieldNames, fieldTypes);
    }

    private ModelProvider createModelProvider(
            FlinkContext context, ContextResolvedModel catalogModel) {

        final Optional<ModelProviderFactory> factoryFromCatalog =
                catalogModel
                        .getCatalog()
                        .flatMap(Catalog::getFactory)
                        .map(
                                f ->
                                        f instanceof ModelProviderFactory
                                                ? (ModelProviderFactory) f
                                                : null);

        final Optional<ModelProviderFactory> factoryFromModule =
                context.getModuleManager().getFactory(Module::getModelProviderFactory);

        // Since the catalog is more specific, we give it precedence over a factory provided by any
        // modules.
        final ModelProviderFactory factory =
                firstPresent(factoryFromCatalog, factoryFromModule).orElse(null);

        return FactoryUtil.createModelProvider(
```

Review Comment:
   I think the actual provider should implement both interfaces and then let the framework determine which function is used at a later stage. If the model developer prefers to split the implementation, I think we can convert the SqlMLPredictTableFunction as a whole rather than converting just the ModelCall:
   ```java
   // Overrides SqlRexConvertletTable#get: route the whole ML_PREDICT call to a
   // dedicated convertlet, and fall back to the standard table for everything else.
   public SqlRexConvertlet get(SqlCall call) {
       ...
       if (isMLPredict(call)) {
           return this::convertSqlMLPredictFunction;
       }
       return StandardConvertletTable.INSTANCE.get(call);
   }
   ```
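   To make the first suggestion concrete, here is a minimal, self-contained sketch of the "one provider implements both interfaces" shape. The interface and method names below are stand-ins invented for illustration, not the actual Flink model-provider API; the point is only that a single provider can expose both the synchronous and asynchronous prediction paths and let the framework choose between them at a later stage.
   ```java
   import java.util.concurrent.CompletableFuture;

   // Stand-in interfaces for illustration only; NOT the real Flink model-provider API.
   interface SyncPredictProvider {
       String predict(String input);
   }

   interface AsyncPredictProvider {
       CompletableFuture<String> predictAsync(String input);
   }

   // A single provider implements both variants; the framework can later decide
   // whether to run the sync or the async prediction path.
   public class EchoModelProvider implements SyncPredictProvider, AsyncPredictProvider {

       @Override
       public String predict(String input) {
           return "echo: " + input;
       }

       @Override
       public CompletableFuture<String> predictAsync(String input) {
           // Here the async path simply wraps the sync one; a real provider could
           // issue a non-blocking call to the model endpoint instead.
           return CompletableFuture.completedFuture(predict(input));
       }

       public static void main(String[] args) throws Exception {
           EchoModelProvider provider = new EchoModelProvider();
           System.out.println(provider.predict("hello"));
           System.out.println(provider.predictAsync("hello").get());
       }
   }
   ```
   With this shape, choosing between the sync and async prediction function becomes a framework decision rather than something baked into the convertlet or the factory lookup.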