Nice news. Congrats!
Leonard Xu wrote:
Congratulations!
Thanks, Zhijiang and Piotr, for the great work, and thanks to everyone involved!
Best,
Leonard Xu
Hi
On 2019/9/11 17:22, Till Rohrmann wrote:
I'm very happy to announce that Zili Chen (some of you might also know
him as Tison Kun) accepted the offer of the Flink PMC to become a
committer of the Flink project.
Congratulations, Zili Chen!
Regards.
On 2019/9/11 16:17, Stephan Ewen wrote:
We still maintain connectors for Kafka 0.8 and 0.9 in Flink.
I would suggest dropping them in Flink 1.10 and supporting only
Kafka 0.10 onwards.
Are there any concerns about this, or is there still a significant number
of users on these versions?
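For anyone still on the old versions, a rough migration sketch using the universal Kafka connector (flink-connector-kafka) in place of the version-specific 0.8/0.9 consumers; the topic name, broker address, and group id below are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaUniversalConnectorExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Standard consumer properties; broker address and group id are placeholders.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092");
        props.setProperty("group.id", "example-group");

        // The universal connector is meant for Kafka 0.10 and newer brokers,
        // replacing the version-specific 0.8/0.9 consumers.
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);

        env.addSource(consumer).print();
        env.execute("universal Kafka connector example");
    }
}

With that in place, the version-specific FlinkKafkaConsumer08/09 classes would no longer be needed.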
On 2019/9/8 5:40 PM, Anyang Hu wrote:
In Flink 1.9, is there a way to read a local JSON file in Flink SQL, similar
to the way a CSV file can be read?
Hi,
might this thread help you?
http://mail-archives.apache.org/mod_mbox/flink-dev/201604.mbox/%3cCAK+0a_o5=c1_p3sylrhtznqbhplexpb7jg_oq-sptre2neo...@mail.gmail.
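For what it's worth, one workaround is to read the file with the DataSet API, parse it yourself, and register the result as a table. A minimal sketch, assuming Flink 1.9, one JSON object per line, and Jackson for parsing; the file path, POJO, and field names are made up:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.BatchTableEnvironment;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonFileSqlExample {

    // Hypothetical POJO matching the JSON records, e.g. {"name":"a","score":12}.
    public static class Record {
        public String name;
        public int score;
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

        // Read the local file line by line (one JSON object per line assumed)
        // and parse each line with Jackson into the POJO above.
        DataSet<Record> records = env
                .readTextFile("file:///path/to/input.json")
                .map(line -> new ObjectMapper().readValue(line, Record.class))
                .returns(Record.class);

        // Register the parsed records as a table and query them with SQL.
        tEnv.registerDataSet("records", records);
        Table result = tEnv.sqlQuery("SELECT name, score FROM records WHERE score > 10");

        tEnv.toDataSet(result, Record.class).print();
    }
}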
On 2019/9/6 8:55 PM, Fabian Hueske wrote:
I'm very happy to announce that Kostas Kloudas is joining the Flink PMC.
Kostas has been contributing to Flink for many years and puts a lot of effort
into helping our users and growing the Flink community.
Please join me in congratulating Kostas!
Congratulations!
Hi
On 2019/9/4 19:30, liu ze wrote:
I use the ROW_NUMBER() OVER() function to do Top-N. The total amount of
data is 60,000 rows, but the state is 12 GB.
Eventually the job fails with an OOM. Is there any way to optimize this?
ref:
https://stackoverflow.com/questions/50812837/flink-taskmanager-out-of-memory-and-memory-config
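As a general pointer (not specific to this job), two settings that often help with large SQL state are the RocksDB state backend and idle state retention. A rough sketch, assuming Flink 1.9 with the Blink planner and the flink-statebackend-rocksdb dependency; the checkpoint path and retention times are placeholders:

import org.apache.flink.api.common.time.Time;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TopNStateSettingsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keep large state off the JVM heap by storing it in RocksDB on disk.
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-checkpoints"));

        StreamTableEnvironment tEnv = StreamTableEnvironment.create(
                env,
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Expire state for keys that have been idle for a while, so the
        // Top-N state does not grow without bound.
        tEnv.getConfig().setIdleStateRetentionTime(Time.hours(1), Time.hours(2));

        // ... register sources and run the ROW_NUMBER() OVER (...) Top-N query here.
    }
}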
hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.html
>
> Let me know if you need more information.
>
> Regards,
> Robert
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-master/apis/batch/hadoop_compatibility.html
Hello -
Forgive me if this has been asked before, but I'm trying to determine the
best way to add compression to DataSink outputs (starting with
TextOutputFormat). Realistically, I would like each partition file (based
on parallelism) to be compressed independently with gzip, but am open to
other
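For reference, a rough sketch of the Hadoop-compatibility route hinted at in the links above, assuming flink-hadoop-compatibility and a Hadoop client dependency are on the classpath; the output path and sample data are placeholders:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class GzipTextOutputExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Placeholder data; each parallel subtask writes its own part file.
        DataSet<String> lines = env.fromElements("first line", "second line");

        // Configure Hadoop's TextOutputFormat to gzip every part file it writes.
        Job job = Job.getInstance();
        TextOutputFormat.setCompressOutput(job, true);
        TextOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        TextOutputFormat.setOutputPath(job, new Path("file:///tmp/gzipped-output"));
        // Hadoop writes "key<separator>value"; an empty separator and empty value
        // leave each record as just the original line.
        job.getConfiguration().set("mapreduce.output.textoutputformat.separator", "");

        HadoopOutputFormat<String, String> hadoopFormat =
                new HadoopOutputFormat<>(new TextOutputFormat<String, String>(), job);

        // Hadoop output formats expect key/value pairs, so emit (line, "") tuples.
        lines.map(line -> Tuple2.of(line, ""))
             .returns(Types.TUPLE(Types.STRING, Types.STRING))
             .output(hadoopFormat);

        env.execute("gzip text output example");
    }
}

Each parallel subtask then commits its own independently gzipped part file to the output directory.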