Hi,

I've been looking at SPARK-17845 and I am curious whether there is any
reason to make it a breaking change. In Spark 2.0 and below we could use:

    Window.partitionBy("foo").orderBy("bar").rowsBetween(-sys.maxsize,
                                                         sys.maxsize)

In 2.1.0 this code silently produces incorrect results (the frame
becomes ROWS BETWEEN -1 PRECEDING AND UNBOUNDED FOLLOWING). Couldn't we
set Window.unboundedPreceding equal to -sys.maxsize to ensure backward
compatibility?
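
For illustration, here is a minimal sketch of the difference (assuming
a local SparkSession; the toy data and column names are made up):

    import sys

    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.master("local[1]").getOrCreate()

    # Toy data: "foo" is the partition key, "bar" the ordering key.
    df = spark.createDataFrame(
        [("a", 1, 10), ("a", 2, 20), ("a", 3, 30)],
        ["foo", "bar", "val"])

    # Spark 2.0 idiom: +/- sys.maxsize was treated as an unbounded frame.
    legacy = (Window.partitionBy("foo").orderBy("bar")
              .rowsBetween(-sys.maxsize, sys.maxsize))

    # Spark 2.1 idiom: the new explicit constants.
    current = (Window.partitionBy("foo").orderBy("bar")
               .rowsBetween(Window.unboundedPreceding,
                            Window.unboundedFollowing))

    # On 2.1.0 legacy_sum silently differs from current_sum, because
    # -sys.maxsize no longer maps to UNBOUNDED PRECEDING; on 2.0.x both
    # columns are the full partition sum (60 in every row).
    df.select("foo", "bar",
              F.sum("val").over(legacy).alias("legacy_sum"),
              F.sum("val").over(current).alias("current_sum")).show()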

-- 

Maciej Szymkiewicz

