There was a discussion a while ago where we agreed that type resolution should 
determine which overload of the method is being called - and therefore the type 
that each argument needs to be converted to - and also how to convert each 
argument to the parameter type. (I use the words ‘argument’ and ‘parameter’ in 
their technical sense [1].)

One kind of conversion is widening, e.g. converting a SMALLINT argument to an 
INTEGER parameter type. This is widening because every valid SMALLINT value 
maps to exactly one INTEGER value.

Another kind of conversion is coercion, e.g. converting a CHAR or VARCHAR 
argument to a DATE parameter type. This is coercion because some strings (e.g. 
‘2024-09-23’ and ‘2024-9-23’) correspond to valid dates, and others (e.g. 
‘Hello’) do not.
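
For what it's worth, Calcite's cast-rule tables already draw roughly this line 
between the two kinds. Here is an untested sketch (assuming the boolean 
'coerce' overload of SqlTypeUtil.canCastFrom, which I believe still exists, and 
RelDataTypeSystem.DEFAULT):

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.sql.type.SqlTypeUtil;

public class ConversionKinds {
  public static void main(String[] args) {
    RelDataTypeFactory typeFactory =
        new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);
    RelDataType smallint = typeFactory.createSqlType(SqlTypeName.SMALLINT);
    RelDataType integer = typeFactory.createSqlType(SqlTypeName.INTEGER);
    RelDataType varchar = typeFactory.createSqlType(SqlTypeName.VARCHAR, 10);
    RelDataType date = typeFactory.createSqlType(SqlTypeName.DATE);

    // Widening: every valid SMALLINT value maps to one INTEGER value.
    System.out.println("SMALLINT -> INTEGER, no coercion: "
        + SqlTypeUtil.canCastFrom(integer, smallint, false));

    // Coercion: some strings ('Hello') are not valid dates, so this
    // conversion only makes sense when coercion is explicitly allowed.
    System.out.println("VARCHAR -> DATE, no coercion:     "
        + SqlTypeUtil.canCastFrom(date, varchar, false));
    System.out.println("VARCHAR -> DATE, with coercion:   "
        + SqlTypeUtil.canCastFrom(date, varchar, true));
  }
}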

I do not remember where that discussion ended up. I thought we would implement 
it by storing the ‘how to convert’ information in the SqlValidator along with 
the inferred parameter type.
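
If we revive that approach, the shape could be as simple as a side table keyed 
by the argument's SqlNode. None of the following exists in Calcite today - the 
ArgumentConversions, Plan and Kind names are hypothetical, purely to 
illustrate where the information could live:

import java.util.IdentityHashMap;
import java.util.Map;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.sql.SqlNode;

/** Hypothetical registry, populated during validation, that records for
 * each argument node the parameter type it must be converted to and the
 * kind of conversion, so that SQL-to-rel conversion can apply it later. */
class ArgumentConversions {
  enum Kind { NONE, WIDENING, COERCION }

  static final class Plan {
    final RelDataType targetType;
    final Kind kind;

    Plan(RelDataType targetType, Kind kind) {
      this.targetType = targetType;
      this.kind = kind;
    }
  }

  // Identity map, because SqlNode does not define value equality.
  private final Map<SqlNode, Plan> plans = new IdentityHashMap<>();

  void register(SqlNode argument, RelDataType targetType, Kind kind) {
    plans.put(argument, new Plan(targetType, kind));
  }

  Plan lookup(SqlNode argument) {
    return plans.get(argument);
  }
}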

Julian

PS Please use ‘[DISCUSS]’ rather than ‘Discussion:’ in the message subject. 
Standards are useful.

[1] https://www.educative.io/answers/parameter-vs-argument

> On Sep 21, 2024, at 9:35 AM, Cancai Cai <caic68...@gmail.com> wrote:
> 
> Hello, everyone in the calcite community.
> 
> I encountered a problem while dealing with
> https://issues.apache.org/jira/browse/CALCITE-6300. That is, when should
> the type conversions related to map and array be performed?
> 
> At present, I can think of two solutions. One is to convert the final
> result when it is returned, as in
> https://issues.apache.org/jira/browse/CALCITE-5948, e.g.:
> @LibraryOperator(libraries = {SPARK})
> public static final SqlFunction ARRAY_COMPACT =
>     SqlBasicFunction.create(SqlKind.ARRAY_COMPACT,
>         SqlLibraryOperators::arrayCompactReturnType,
>         OperandTypes.ARRAY);
> 
> The second is to convert after parameter type verification, for example,
> https://issues.apache.org/jira/browse/CALCITE-6300
> @LibraryOperator(libraries = {SPARK})
> public static final SqlFunction MAP_VALUES =
>     SqlBasicFunction.create(SqlKind.MAP_VALUES,
>         ReturnTypes.TO_MAP_VALUES_NULLABLE,
>         OperandTypes.MAP)
>         .withOperandTypeInference(InferTypes.MAP_SPARK_FUNCTION_TYPE);
> 
> Perhaps neither of these approaches is suitable, because they are not
> universal solutions. I don't know how other databases define a standard for
> this kind of type conversion, but Calcite does not seem to have a clear
> standard for it. I think it should be solved, because many Spark functions
> need this conversion. (P.S. I have implemented an
> adjustTypeForMapFunctionConstructor API, but I don't know where to put it.)
> 
> More discussions can be viewed in jira:
> https://issues.apache.org/jira/browse/CALCITE-6300
> 
> If there are any mistakes in my description, please correct me. If others
> have better suggestions or code examples from other projects, please share
> them; I am happy to learn.
> 
> Sorry for asking this question so late
> 
> Best wishes,
> Cancai Cai
