Hello Reynold, I truly appreciate your time and attention to this feature. On 
the performance question, here are my thoughts:

* As Serge mentioned above, Apache Spark needs to stay aligned with other 
competitive products. We should not dismiss the potential benefits solely 
because of a performance regression. 

* Based on the current discussion in the Google doc above, I plan to add a new 
DataType that lets users specify the precision, with the default remaining at 
microseconds to minimize impact. 

* I will also include performance benchmarks as a subtask to help users 
understand the trade-offs of increasing precision.
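As a quick back-of-the-envelope illustration of the range trade-off (plain
Python, not Spark code; the unit names here are just labels for the
calculation, not proposed API):

```python
# Representable range of a signed 64-bit integer at each precision,
# measured in years on either side of the Unix epoch.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for unit, ticks_per_second in [("seconds", 1),
                               ("milliseconds", 10**3),
                               ("microseconds", 10**6),
                               ("nanoseconds", 10**9)]:
    years = (2**63 - 1) / ticks_per_second / SECONDS_PER_YEAR
    print(f"{unit:>12}: ~±{years:,.0f} years")
```

At microsecond precision an int64 covers roughly ±292,000 years, while at
nanosecond precision it covers only about ±292 years (roughly 1677 to 2262),
which is exactly why a wider or parameterized representation needs to be
benchmarked.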

On 2025/03/17 18:54:07 Reynold Xin wrote:
> Any thoughts on how to deal with performance here? Initially we didn't do
> nano level precision because of performance (would not be able to fit
> everything into a 64 bit int).
> 
> On Mon, Mar 17, 2025 at 11:34 AM Sakthi <sa...@apache.org> wrote:
> 
> > +1 (non-binding)
> >
> > On Mon, Mar 17, 2025 at 11:32 AM Zhou Jiang <zh...@gmail.com>
> > wrote:
> >
> >> +1 for the nanosecond support
> >>
> >>
> >> > On Mar 16, 2025, at 16:03, Dongjoon Hyun <do...@apache.org> wrote:
> >> >
> >> > +1 for supporting NanoSecond Timestamps.
> >> >
> >> > Thank you, Qi.
> >> >
> >> > Dongjoon.
> >> >
> >> > ---------------------------------------------------------------------
> >> > To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
> >> >
> >>
> >>
>  

