robreeves commented on code in PR #50269:
URL: https://github.com/apache/spark/pull/50269#discussion_r2035627896


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala:
##########
@@ -763,4 +763,29 @@ object DateTimeUtils extends SparkDateTimeUtils {
      throw QueryExecutionErrors.invalidDatetimeUnitError("TIMESTAMPDIFF", unit)
     }
   }
+
+  /**
+   * Converts separate time fields into a long that represents microseconds
+   * since the start of the day.
+   * @param hours the hour, from 0 to 23
+   * @param minutes the minute, from 0 to 59
+   * @param secsAndMicros the second, from 0 to 59.999999
+   * @return the time represented as microseconds since the start of the day
+   */
+  def timeToMicros(hours: Int, minutes: Int, secsAndMicros: Decimal): Long = {
+    assert(secsAndMicros.scale == 6,
+      s"Seconds fraction must have 6 digits for microseconds but got ${secsAndMicros.scale}")
+    val unscaledSecFrac = secsAndMicros.toUnscaledLong
+    val totalMicros = unscaledSecFrac.toInt // 8 digits cannot overflow Int

Review Comment:
   This is existing logic that I chose to keep consistent with `MakeTimestamp` 
[here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L2833).
 I agree it seems like it should be 6 digits at most. If you prefer to couple 
the two together, I can refactor so both classes share the logic, as I 
originally had it when `MakeTime` lived in `datetimeExpressions`.
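   For context, the conversion under discussion can be sketched as a standalone 
function. This is a hypothetical sketch that substitutes `java.math.BigDecimal` 
for Spark's `Decimal` (whose `toUnscaledLong` corresponds to 
`unscaledValue.longValueExact` below); the constants mirror the values Spark 
keeps in `DateTimeConstants` but are redefined here:

```scala
// Hypothetical standalone sketch of timeToMicros; not the Spark implementation.
object TimeToMicrosSketch {
  // Assumed constants (Spark defines equivalents in DateTimeConstants).
  val MicrosPerMinute: Long = 60L * 1000000L
  val MicrosPerHour: Long = 60L * MicrosPerMinute

  def timeToMicros(hours: Int, minutes: Int, secsAndMicros: java.math.BigDecimal): Long = {
    // At scale 6, the unscaled value of e.g. 59.123456 is 59123456,
    // i.e. the seconds field expressed directly in microseconds.
    require(secsAndMicros.scale == 6,
      s"Seconds fraction must have 6 digits for microseconds but got ${secsAndMicros.scale}")
    val secondsInMicros = secsAndMicros.unscaledValue.longValueExact
    hours * MicrosPerHour + minutes * MicrosPerMinute + secondsInMicros
  }

  def main(args: Array[String]): Unit = {
    // 01:02:03.000004 since midnight:
    // 1h = 3,600,000,000 us; 2min = 120,000,000 us; 3.000004s = 3,000,004 us
    println(timeToMicros(1, 2, new java.math.BigDecimal("3.000004"))) // 3723000004
  }
}
```

   With valid field ranges (hours 0-23, minutes 0-59, seconds 0-59.999999) the 
result stays well below `Long.MaxValue`, so the additions cannot overflow.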



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

