Then let me provide a PR so that we can discuss an alternative way.

2017-06-02 8:26 GMT+02:00 Reynold Xin:
> Seems like a bug we should fix? I agree some form of truncation makes more
> sense.
>
>
> On Thu, Jun 1, 2017 at 1:17 AM, Anton Okolnychyi <
> anton.okolnyc...@gmail.com> wrote:
>
>> Hi all,
>>
>> I would like to ask what the community thinks about the way Spark
>> handles nanoseconds in the Timestamp type.
>>
>> As far as I see in the code, Spark assumes microsecond precision.
>> Therefore, I expect to get a timestamp truncated to microseconds, or an
>> exception, if I specify a timestamp with nanoseconds.
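
For illustration, here is a minimal sketch of the case being discussed. The object name, session setup, and the literal value are my own assumptions, not from the thread; the point is only that the string carries nine fractional digits while Spark's TimestampType stores microseconds (six digits), so the extra digits must either be truncated or rejected.

    import org.apache.spark.sql.SparkSession

    object NanosecondTimestampCheck {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("nanosecond-timestamp-check")
          .master("local[*]")
          .getOrCreate()

        // The literal below has 9 fractional digits (nanoseconds), but Spark's
        // TimestampType keeps only 6 (microseconds); the question in this
        // thread is whether the extra digits should be truncated or cause
        // an exception.
        spark.sql("SELECT CAST('2017-06-01 10:30:00.123456789' AS TIMESTAMP) AS ts")
          .show(truncate = false)

        spark.stop()
      }
    }

Running this as a standalone application and inspecting the value printed for ts shows which of the two behaviours Spark currently applies.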