That sounds right to me. Cheng could elaborate if you are missing something.
On Fri, Feb 13, 2015 at 11:36 AM, Manoj Samel wrote:
> Thanks Michael for the pointer & sorry for the delayed reply.
>
> Taking a quick inventory of scope of change - Is the column type for
> Decimal caching needed only [...]
Hi Manoj,

Yes, you've hit the main point. I think the timestamp type support in the
in-memory columnar cache can be a good reference for you. Also, you may
want to enable compression support for the decimal type by adding the DECIMAL
column type to RunLengthEncoding.supports and DictionaryEncoding.supports.
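To make the second suggestion concrete, here is a self-contained sketch of the idea of opting a new column type into a compression scheme's `supports` check. Note this is a simplified stand-in, not the actual Spark internal API: the real `supports` methods live in Spark's `org.apache.spark.sql.columnar.compression` package and take Spark's own `ColumnType` instances, and the type names below are invented for illustration.

```scala
// Hypothetical stand-ins for Spark's internal column types; the real
// ColumnType hierarchy in org.apache.spark.sql.columnar is richer.
sealed trait ColumnType
case object INT extends ColumnType
case object LONG extends ColumnType
case object STRING extends ColumnType
case object BINARY extends ColumnType
case object DECIMAL extends ColumnType // the type being added

object RunLengthEncoding {
  // Adding DECIMAL to this match is what "enabling compression support"
  // amounts to: the scheme now accepts decimal columns as candidates.
  def supports(columnType: ColumnType): Boolean = columnType match {
    case INT | LONG | STRING | DECIMAL => true
    case _                             => false
  }
}
```

With this change, a decimal column becomes eligible for run-length encoding, while types not listed (e.g. `BINARY` here) still fall back to uncompressed storage.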
Thanks Michael for the pointer & sorry for the delayed reply.

Taking a quick inventory of the scope of change - is the column type for
Decimal caching needed only in the caching layer (4 files
in org.apache.spark.sql.columnar - ColumnAccessor.scala,
ColumnBuilder.scala, ColumnStats.scala, ColumnType.scala)?
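For a sense of what the `ColumnType.scala` part of that inventory involves, here is a minimal self-contained sketch of a fixed-width decimal entry. The name `DecimalColumnType`, the choice to store the unscaled `Long`, and the two-method shape are all assumptions for illustration; Spark's real `ColumnType` trait carries more members (a type id, a default size, `extract`/`append` over its own row abstraction, etc.).

```scala
import java.nio.ByteBuffer

// Hypothetical sketch: a fixed-width decimal column type that stores the
// unscaled long value (workable for precision <= 18), mirroring the kind
// of serialize/deserialize pair ColumnType.scala implementations provide.
object DecimalColumnType {
  val defaultSize = 8 // bytes per value: one Long

  // Write one decimal value (as its unscaled long) into the column buffer.
  def append(unscaled: Long, buffer: ByteBuffer): Unit =
    buffer.putLong(unscaled)

  // Read one decimal value back out of the column buffer.
  def extract(buffer: ByteBuffer): Long =
    buffer.getLong()
}
```

The builder, accessor, and stats classes would then delegate to such a column type, which is why those four files move together when a new type is added.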