Thanks Kailash for bringing this up. I think this is a good idea. By
passing the ParquetWriter we gain much more flexibility.
I did a small PR adding the ability to set compression on the Parquet
writer: https://github.com/apache/flink/pull/7547. But I believe this is
the wrong approach. For exa
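To illustrate the point (a rough sketch only, not code from that PR): with
flink-parquet's ParquetBuilder / ParquetWriterFactory, once the user supplies
the underlying Parquet builder, compression can be set directly through
parquet-avro's builder options, without any dedicated Flink API. The helper
name below is made up for the example.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.parquet.ParquetBuilder;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class CompressedParquetWriters {

    // Hypothetical helper: returns a BulkWriter.Factory for StreamingFileSink
    // that writes Snappy-compressed Avro GenericRecords.
    public static ParquetWriterFactory<GenericRecord> forSchema(Schema schema) {
        // keep only the String form of the schema so the serializable lambda
        // below captures a String, mirroring how ParquetAvroWriters handles it
        final String schemaString = schema.toString();
        ParquetBuilder<GenericRecord> builder = out ->
                AvroParquetWriter.<GenericRecord>builder(out)
                        .withSchema(new Schema.Parser().parse(schemaString))
                        // compression is configured on the Parquet builder itself,
                        // no extra knob needed on the Flink side
                        .withCompressionCodec(CompressionCodecName.SNAPPY)
                        .build();
        return new ParquetWriterFactory<>(builder);
    }
}

A sink would then be created with something like
StreamingFileSink.forBulkFormat(basePath, CompressedParquetWriters.forSchema(schema)).build().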
+1. Sounds good to me.
-Jakob
Friendly reminder. Any thoughts on this approach?
Hello,
I am looking to contribute ProtoParquetWriter support, which can be used
as a bulk format with the StreamingFileSink API. There have been earlier
discussions on this in the user mailing list: https://goo.gl/ya2StL and I
thought it would be a good addition to have.
For implementation, looking at
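One possible shape for such a factory (only a rough sketch, assuming it
builds on flink-parquet's ParquetBuilder / ParquetWriterFactory and
parquet-protobuf's ProtoWriteSupport; the class and method names below are
hypothetical, not the proposed contribution):

import com.google.protobuf.Message;
import org.apache.flink.formats.parquet.ParquetBuilder;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.api.WriteSupport;
import org.apache.parquet.io.OutputFile;
import org.apache.parquet.proto.ProtoWriteSupport;

public class ProtoParquetWriters {

    // Hypothetical entry point: the returned factory is a BulkWriter.Factory
    // and can be handed to StreamingFileSink.forBulkFormat(...).
    public static <T extends Message> ParquetWriterFactory<T> forType(Class<T> protoClass) {
        ParquetBuilder<T> builder = out -> new ProtoWriterBuilder<>(out, protoClass).build();
        return new ParquetWriterFactory<>(builder);
    }

    // Adapts parquet-protobuf's ProtoWriteSupport to ParquetWriter.Builder so
    // the writer is created on the OutputFile that Flink hands to the builder.
    private static class ProtoWriterBuilder<T extends Message>
            extends ParquetWriter.Builder<T, ProtoWriterBuilder<T>> {

        private final Class<T> protoClass;

        private ProtoWriterBuilder(OutputFile out, Class<T> protoClass) {
            super(out);
            this.protoClass = protoClass;
        }

        @Override
        protected WriteSupport<T> getWriteSupport(Configuration conf) {
            return new ProtoWriteSupport<>(protoClass);
        }

        @Override
        protected ProtoWriterBuilder<T> self() {
            return this;
        }
    }
}

With something along these lines, a user could write
StreamingFileSink.forBulkFormat(path, ProtoParquetWriters.forType(MyProto.class))
(MyProto being any generated protobuf class) to get bulk Parquet output.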
cc'ing a few folks who are interested in this discussion.