What would be the timeline for the Parquet caching work?

The reason I'm asking about the columnar compressed format is that
there are some problems for which Parquet is not practical.

On Mon, Aug 25, 2014 at 1:13 PM, Michael Armbrust
<mich...@databricks.com> wrote:
>> What is the plan for getting Tachyon/off-heap support for the columnar
>> compressed store?  It's not in 1.1 is it?
>
>
> It is not in 1.1 and there are no concrete plans for adding it at this
> point.  Currently, more engineering investment is going into caching
> Parquet data in Tachyon instead.  That approach will have much better
> support for nested data, leverage other work being done on Parquet, and
> alleviate your concerns about wire format compatibility.
>
> That said, if someone really wants to try to implement it, I don't think it
> would be very hard.  The primary challenge will be designing a clean
> interface that is not too tied to this one implementation.
>
>>
>> Also, how likely is the wire format for the columnar compressed data
>> to change?  That would be a problem for write-through or persistence.
>
>
> We aren't making any guarantees at the moment that it won't change.  It's
> currently only intended for temporary caching of data.
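
For anyone following along, here is a rough sketch of what the Tachyon-backed
Parquet caching described above could look like with the 1.1-era SchemaRDD API.
This is just an illustration, not the plan itself: the tachyon:// host/port,
paths, input file, and table name are placeholders, `sc` is the usual
SparkContext from the shell, and it assumes the Tachyon client is on the
classpath:

  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)

  // Write a SchemaRDD out as Parquet onto a Tachyon-backed path
  // (host, port, and paths below are placeholders).
  val events = sqlContext.jsonFile("hdfs:///data/events.json")
  events.saveAsParquetFile("tachyon://master:19998/cache/events")

  // Read it back; subsequent scans are served out of Tachyon memory.
  val cached = sqlContext.parquetFile("tachyon://master:19998/cache/events")
  cached.registerTempTable("events")
  sqlContext.sql("SELECT count(*) FROM events").collect()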
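
For contrast, the in-memory columnar compressed store under discussion is, as
far as I understand, what cacheTable builds under the covers.  Since its byte
layout isn't guaranteed, it only makes sense for temporary caching within a
single application, e.g.:

  // Build the compressed in-memory columnar buffers for a registered table.
  sqlContext.cacheTable("events")
  // Queries against the table are now answered from the columnar cache.
  sqlContext.sql("SELECT count(*) FROM events").collect()
  // Drop the cached buffers when done; nothing is persisted anywhere.
  sqlContext.uncacheTable("events")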
