Seems like a bug we should fix? I agree some form of truncation makes more
sense.
On Thu, Jun 1, 2017 at 1:17 AM, Anton Okolnychyi wrote:
> Hi all,
>
> I would like to ask what the community thinks about how Spark handles
> nanoseconds in the Timestamp type.
>
> As far as I see in the code, Spark assumes microsecond precision.
Again (I've probably said this more than 10 times already in different
threads), SPARK-18350 has no impact on whether the timestamp type is with
timezone or without timezone. It simply allows a session-specific timezone
setting rather than having Spark always rely on the machine timezone.
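For reference, the session-local timezone from SPARK-18350 is exposed as the
spark.sql.session.timeZone conf. Below is a minimal Scala sketch of what it
changes (rendering and parsing for the session, never the stored value),
assuming a Spark 2.2+ build; the commented output is what the semantics imply,
not a captured transcript:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[1]").getOrCreate()

    // Parse a timestamp literal while the session timezone is UTC.
    spark.conf.set("spark.sql.session.timeZone", "UTC")
    val df = spark.sql("SELECT timestamp '2017-06-01 00:00:00' AS ts")
    df.show(false)  // 2017-06-01 00:00:00

    // The same internal microseconds-since-epoch value, rendered under a
    // different session timezone, displays shifted; nothing stored changes.
    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
    df.show(false)  // 2017-05-31 17:00:00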
Yea I don't see why this needs to be a per-table config. If the user wants to
configure it per table, can't they just declare the data type on a per-table
basis, once we have separate types for timestamp w/ tz and w/o tz?
On Thu, Jun 1, 2017 at 4:14 PM, Michael Allman wrote:
> I would suggest that…
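To make the per-column alternative concrete, here is a purely hypothetical
sketch; neither type name below existed in Spark SQL at the time of this
thread:

    // Hypothetical DDL: with distinct timestamp types, the semantics would
    // be chosen per column, and no per-table flag would be needed.
    spark.sql("""
      CREATE TABLE events (
        wall_clock TIMESTAMP WITHOUT TIME ZONE,  -- local/wall-clock reading
        instant    TIMESTAMP WITH TIME ZONE      -- absolute point in time
      ) USING parquet
    """)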
I would suggest that making timestamp type behavior configurable and persisted
per-table could introduce some real confusion, e.g. in queries involving tables
with different timestamp type semantics.
I suggest starting with the assumption that timestamp type behavior is a
per-session flag that…
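A sketch of the confusion being described, using a made-up table property
('timestamp.semantics') to stand in for a persisted per-table setting; Spark
would simply ignore the property, so this is only illustrative:

    // Two tables whose timestamps are meant to be read differently.
    spark.sql("CREATE TABLE t_instant (ts TIMESTAMP) USING parquet " +
      "TBLPROPERTIES ('timestamp.semantics' = 'instant')")
    spark.sql("CREATE TABLE t_wall (ts TIMESTAMP) USING parquet " +
      "TBLPROPERTIES ('timestamp.semantics' = 'wall-clock')")

    // Whether this join compares the same notion of time would now depend
    // on table metadata that the query text itself never shows.
    spark.sql("SELECT * FROM t_instant JOIN t_wall ON t_instant.ts = t_wall.ts")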
Hi all,
I would like to ask what the community thinks about how Spark handles
nanoseconds in the Timestamp type.
As far as I see in the code, Spark assumes microsecond precision.
Therefore, I expect either a timestamp truncated to microseconds or an
exception if I specify a timestamp with nanoseconds.
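To illustrate the precision point, a minimal round-trip sketch in Scala; the
commented output follows from microsecond-precision storage (sub-microsecond
digits are dropped on conversion), not from a captured run:

    import java.sql.Timestamp
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[1]").getOrCreate()
    import spark.implicits._

    // A java.sql.Timestamp carries full nanosecond precision...
    val ts = Timestamp.valueOf("2017-06-01 12:00:00.123456789")

    // ...but Spark stores timestamps as microseconds since the epoch, so a
    // round trip through a DataFrame silently drops the last three digits.
    val back = Seq(ts).toDF("ts").head().getTimestamp(0)

    println(ts)    // 2017-06-01 12:00:00.123456789
    println(back)  // 2017-06-01 12:00:00.123456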