yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1889825476


##########
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##########
@@ -365,6 +368,38 @@ public static TimestampData toTimestampData(double v, int precision) {
         }
     }
 
+    public static TimestampData toTimestampData(int v, int precision) {
+        switch (precision) {
+            case 0:
+                if (MIN_EPOCH_SECONDS <= v && v <= MAX_EPOCH_SECONDS) {
+                    return timestampDataFromEpochMills((v * MILLIS_PER_SECOND));
+                } else {
+                    return null;
+                }
+            case 3:
+                return timestampDataFromEpochMills(v);
+            default:
+                throw new TableException(
+                        "The precision value '"
+                                + precision
+                                + "' for function "
+                                + "TO_TIMESTAMP_LTZ(numeric, precision) is 
unsupported,"
+                                + " the supported value is '0' for second or 
'3' for millisecond.");
+        }
+    }
+
+    public static TimestampData toTimestampData(long epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }
+
+    public static TimestampData toTimestampData(double epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }
+
+    public static TimestampData toTimestampData(DecimalData epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }

Review Comment:
   Hi @snuyanzin, you asked a great question. I spent quite some time on this, and I think I figured it out.
   
   In this ticket, we aim to keep the existing Scala tests passing so we can confirm that the function's existing behavior remains unchanged, and for that we still depend on some Scala-generated code. Ideally, the Scala stack would cover only the existing behaviors and the new Java function would cover the new ones, but we can't have both stacks running at the same time. That is why I had to modify the Scala folder: without those modifications, my new Java tests fail because they can't pick up the new behaviors.
   
   I attempted to get rid of the Scala tech stack entirely, but that caused the existing Scala tests to fail.
   
   Once the new functionality is out and stable, we should complete the migration by removing the Scala tests and fully transitioning to Java. For now, though, I think we should keep it as is, so that we have confidence that we aren't breaking existing tests. What do you think?
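   
   For reference, here is a minimal sketch (not part of the PR diff) of how I expect the new `toTimestampData(int, int)` overload to behave, assuming the semantics shown in the diff above: precision `0` interprets the value as epoch seconds, precision `3` as epoch milliseconds, and any other precision throws a `TableException`. The class name below is hypothetical.
   
   ```java
   import org.apache.flink.table.data.TimestampData;
   import org.apache.flink.table.utils.DateTimeUtils;
   
   public class ToTimestampDataSketch {
       public static void main(String[] args) {
           // Precision 0: the int is treated as seconds since the epoch.
           TimestampData fromSeconds = DateTimeUtils.toTimestampData(100, 0);
           System.out.println(fromSeconds); // expected: 1970-01-01T00:01:40
   
           // Precision 3: the int is treated as milliseconds since the epoch.
           TimestampData fromMillis = DateTimeUtils.toTimestampData(100, 3);
           System.out.println(fromMillis); // expected: 1970-01-01T00:00:00.100
   
           // Any other precision is rejected with a TableException.
           try {
               DateTimeUtils.toTimestampData(100, 6);
           } catch (Exception e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```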


