IMHO that’s not a good comparison. By that logic we shouldn’t have double 
because it’s slower than int.
We should compare against the competition first.

Maybe as part of this effort we’ll need to prototype two competing solutions.

The vast majority of the differences should be related to storage cost. Few 
arithmetic operations will feel it. After all, there are not many arithmetic 
operations defined on timestamps to begin with.
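
To make the storage point concrete, here is a back-of-the-envelope sketch. 
The two-field seconds+nanos layout is purely an assumption for sizing 
purposes, not an actual proposal from this thread:

// Illustrative sizing only -- the seconds+nanos layout below is an
// assumption for back-of-the-envelope math, not Spark's actual design.
object StorageSketch {
  def main(args: Array[String]): Unit = {
    val bytesMicros = 8L       // today's TIMESTAMP: one 64-bit long (microseconds)
    val bytesNanos  = 8L + 4L  // hypothetical: seconds (long) + nanos-of-second (int)
    val rows = 100000000L      // 100M timestamp values
    println(s"micros: ${rows * bytesMicros} bytes")
    println(s"nanos : ${rows * bytesNanos} bytes (+50% per column, before encoding)")
  }
}

Columnar encodings (delta, dictionary) would shrink both, but the relative 
per-value overhead is the part queries that merely store and move timestamps 
would notice.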


On Mar 17, 2025 at 3:03 PM -0700, Reynold Xin <r...@databricks.com> wrote:
Pretty much anything (say, vs. the current timestamp operations in Spark).

On Mon, Mar 17, 2025 at 2:51 PM serge rielau.com <se...@rielau.com> wrote:
What are you comparing performance against?
On Mar 17, 2025 at 11:54 AM -0700, Reynold Xin <r...@databricks.com.INVALID> wrote:
Any thoughts on how to deal with performance here? Initially we didn't do 
nano-level precision because of performance (we would not be able to fit 
everything into a 64-bit int).
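
For reference, the 64-bit limit works out roughly as follows: a signed 
64-bit count of nanoseconds since the Unix epoch only reaches from about 
year 1677 to 2262 (the same cap numpy's datetime64[ns] has), while 64-bit 
microseconds comfortably cover Spark's full year 1 to 9999 TIMESTAMP range. 
A quick sketch (the object name is mine):

// Rough range check: how many years fit in a signed 64-bit count of
// nanoseconds vs. microseconds since the Unix epoch.
object NanoRangeCheck {
  def main(args: Array[String]): Unit = {
    val nanosPerYear  = 365.2425 * 24 * 3600 * 1e9
    val yearsAtNanos  = Long.MaxValue / nanosPerYear           // ~292 years
    val yearsAtMicros = Long.MaxValue / (nanosPerYear / 1000)  // ~292,000 years
    println(f"int64 nanoseconds : +/- $yearsAtNanos%.0f years around 1970")
    println(f"int64 microseconds: +/- $yearsAtMicros%.0f years around 1970")
  }
}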

On Mon, Mar 17, 2025 at 11:34 AM Sakthi <sak...@apache.org> wrote:
+1 (non-binding)

On Mon, Mar 17, 2025 at 11:32 AM Zhou Jiang <zhou.c.ji...@gmail.com> wrote:
+1 for the nanosecond support


> On Mar 16, 2025, at 16:03, Dongjoon Hyun <dongj...@apache.org> wrote:
>
> +1 for supporting NanoSecond Timestamps.
>
> Thank you, Qi.
>
> Dongjoon.
