This is reasonable ... +1
On Sun, Jul 30, 2017 at 2:19 AM, Sean Owen wrote:
> The project had traditionally posted some guidance about upcoming
> releases. The last release cycle was about 6 months. What about penciling
> in December 2017 for 2.3.0? http://spark.apache.org/versioning-policy.html
+1
Bests,
Dongjoon
On Sun, Jul 30, 2017 at 02:20 Sean Owen wrote:
> The project had traditionally posted some guidance about upcoming
> releases. The last release cycle was about 6 months. What about penciling
> in December 2017 for 2.3.0? http://spark.apache.org/versioning-policy.html
>