Hi guys,
I am trying the DDL feature on the release-1.9 branch. I am stuck creating a
table from Kafka with a nested JSON format. Is it possible to specify a "Row"
type for the columns to derive the nested JSON schema?
String sql = "create table kafka_stream(\n" +
        "  a varchar,\n" +
        "  b varchar\n" +
        ")";
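Concretely, something like the following is what I am hoping the DDL can express; the nested column, topic name, and connector properties below are placeholders I made up, so please treat it only as a sketch:

    // tableEnv is assumed to be a TableEnvironment created elsewhere, e.g.
    // StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    String ddl = "CREATE TABLE kafka_stream (\n" +
            "  a VARCHAR,\n" +
            "  b VARCHAR,\n" +
            "  c ROW<d VARCHAR, e INT>\n" +          // nested JSON object mapped to a ROW column
            ") WITH (\n" +
            "  'connector.type' = 'kafka',\n" +
            "  'connector.topic' = 'my_topic',\n" +  // topic name is a placeholder
            "  'format.type' = 'json'\n" +           // remaining connector/format properties omitted
            ")";
    tableEnv.sqlUpdate(ddl);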
Hi everyone!
I found that every time I start a Flink-on-YARN application, the client ships
the flink-uber jar and other dependencies to HDFS and then starts the
ApplicationMaster. Is there any approach to keep the flink-uber jar and the
other library jars on HDFS so that only the configuration file is shipped?
Therefore the y
https://issues.apache.org/jira/browse/FLINK-13938
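For anyone who finds this later: newer Flink releases (1.11 and above, as far as I know) expose this through the yarn.provided.lib.dirs option, which points the YARN deployment at jars already uploaded to HDFS so they are not re-shipped on every submission. A minimal flink-conf.yaml sketch, with a made-up HDFS path:

    # flink-conf.yaml (the path is illustrative; it should contain the Flink dist/lib jars)
    yarn.provided.lib.dirs: hdfs:///flink/flink-dist-dir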
Hi all,
I'd like to enable log rolling for Flink on YARN. I tried modifying
log4j.properties and logback.xml in flink/conf, but there is still only one
"taskmanager.log" in the YARN container logs. Any idea what is going on? Thank
you very much!
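For reference, a rolling appender in conf/log4j.properties would look roughly like the following, assuming the log4j 1.x setup that Flink ships at this version; the size and backup limits are arbitrary:

    log4j.rootLogger=INFO, file
    # Roll the file that YARN points the container at (${log.file}) instead of letting it grow without bound
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.File=${log.file}
    log4j.appender.file.MaxFileSize=100MB
    log4j.appender.file.MaxBackupIndex=10
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n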
Hi everyone,
I am running a BucketingSink (to HDFS) job with Flink on YARN. I found that if
the YARN session crashes, or I kill the YARN session manually, the files on
HDFS are not renamed to the .pending state and the latest checkpoint does not
have a _metadata file. Therefore I cannot resume from it.
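One thing worth checking regarding the missing _metadata: by default, completed checkpoints are discarded when a job is cancelled, so they only survive a killed session if retained (externalized) checkpoints are enabled. A sketch, assuming the DataStream API and a Flink version that still has this call:

    import org.apache.flink.streaming.api.environment.CheckpointConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000);  // checkpoint every 60 seconds
    // Keep checkpoint data (including the _metadata file) around on cancellation or failure,
    // so the job can later be restarted from it with `flink run -s <checkpoint-path> ...`.
    env.getCheckpointConfig().enableExternalizedCheckpoints(
            CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);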
Hi everyone!
Is there a good way to start a YARN session programmatically for some Flink
jobs under Kerberos? Thank you very much!
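In case it helps: the Kerberos side is usually handled by configuration rather than code, by pointing Flink at a keytab so the long-running YARN session can keep its credentials valid. A minimal flink-conf.yaml sketch (the principal and keytab path are made up):

    security.kerberos.login.use-ticket-cache: false
    security.kerberos.login.keytab: /path/to/flink.keytab
    security.kerberos.login.principal: flink-user@EXAMPLE.COM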
Hi guys,
Any good ideas for achieving an exactly-once BucketingSink for text files?
Truncating a compressed binary file corrupts the gzip file, which means I need
to -text that gzip, redirect it to a text file, compress it again, and finally
upload it to HDFS. It is really inefficient. Any other compression approaches?