> On Mon, 23 Sep 2019 at 11:40, ShuQi <shuyun...@163.com> wrote:
> Hi Guys,
>
> The Flink version is 1.9.0. I use OrcTableSource to read an ORC file in HDFS and
> the job executes successfully, with no exception or error. But some
> fields (such as tagIndustry)
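For context, a minimal sketch of how an OrcTableSource is typically built in Flink 1.9; the HDFS path and the ORC schema below (including the tagIndustry column) are hypothetical stand-ins, not the sender's actual job:

    import org.apache.flink.orc.OrcTableSource;
    import org.apache.hadoop.conf.Configuration;

    public class OrcSourceSketch {
        public static void main(String[] args) {
            // Build an OrcTableSource against a hypothetical HDFS path and ORC schema.
            OrcTableSource orcSource = OrcTableSource.builder()
                    .path("hdfs:///path/to/orc/data")                                  // ORC file(s) or directory
                    .forOrcSchema("struct<id:bigint,name:string,tagIndustry:string>")  // ORC struct schema
                    .withConfiguration(new Configuration())                            // Hadoop conf for HDFS access
                    .build();
            // The source would then be registered with a TableEnvironment,
            // e.g. tableEnv.registerTableSource("OrcTable", orcSource);
        }
    }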
Hi Guys,
The Flink version is 1.9.0, built against HDP.
I got the following exception when submitting a job that uses a Hadoop input format to read
a sequence file in HDFS.
Thanks for your help!
Qi
The program finished with the following exception:
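For reference, a minimal sketch of reading a SequenceFile through Flink's Hadoop compatibility layer (flink-hadoop-compatibility); the Writable key/value types and the HDFS path are hypothetical, not the sender's actual job:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.hadoopcompatibility.HadoopInputs;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;

    public class SequenceFileSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Wrap Hadoop's SequenceFileInputFormat; the key/value Writable types must match the file.
            DataSet<Tuple2<LongWritable, Text>> input = env.createInput(
                    HadoopInputs.readSequenceFile(LongWritable.class, Text.class,
                            "hdfs:///path/to/sequence/file"));

            input.first(10).print();
        }
    }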
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:351)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:171)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:163)
Hi, Yuxin Tan:
Thank you very much. My problem has been resolved.
Best,
Zbz
You can take a look at the documentation:
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/config/#rocksdb-native-metrics
Thanks,
Zbz
> On Apr 7, 2024, at 13:41, Lei Wang wrote:
>
> We are using big state and want to do some performance tuning. How can I enable
> RocksDB native metrics?
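For reference, a few of the switches behind the page linked above; they go into flink-conf.yaml, and the selection below is only an example of commonly enabled ones, not a complete or recommended set:

    state.backend.rocksdb.metrics.estimate-num-keys: true
    state.backend.rocksdb.metrics.estimate-live-data-size: true
    state.backend.rocksdb.metrics.cur-size-all-mem-tables: true
    state.backend.rocksdb.metrics.block-cache-usage: true
    state.backend.rocksdb.metrics.num-running-compactions: true

Each enabled property is reported through Flink's metrics system per column family; the documentation also notes that enabling native metrics can degrade performance, so they should be switched on selectively.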
A question: with a Flink cluster deployed on Kubernetes, if the JobManager restarts, 1) the job's jar is cleaned up
and the JobManager can no longer find it, and 2) the running jobs are cancelled as well. How can the restarted
JobManager find the jobs that were running before?
dty...@163.com
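As a sketch only, not a confirmed fix: recovering jobs across a JobManager restart is what the high-availability services are for. With Kubernetes HA the configuration looks roughly like the lines below; the cluster id and storage path are hypothetical, and the exact keys depend on the Flink version:

    kubernetes.cluster-id: my-flink-cluster
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.storageDir: hdfs:///flink/ha

With HA enabled, job graphs and checkpoint metadata are persisted in the storage directory, which is what allows a restarted JobManager to pick up the previously running jobs.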
e2 -> Sink: Unnamed (1/1) (GeneralRedisSinkFunction.invoke:169) - receive data(false,0,86,20200417)
2020-04-17 22:28:39,328 INFO groupBy xx -> to: Tuple2 -> Sink: Unnamed (1/1) (GeneralRedisSinkFunction.invoke:169) - receive data(true,0,131,20200417)
We are using 1.7.2, and the test job's parallelism is 1.
Here is the corresponding issue
@Benchao @Jark
Thank you very much. We have been using Flink 1.9 for a while, and we will try 1.9 +
minibatch.
dixingxin...@163.com
Sender: Jark Wu
Send Time: 2020-04-18 21:38
Receiver: Benchao Li
cc: dixingxing85; user; user-zh
Subject: Re: Does Flink streaming SQL support two-level GROUP BY aggregation?
Hi,
I will use
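For reference, a minimal sketch of the mini-batch settings mentioned above, assuming the Flink 1.9+ Blink planner; the latency and batch-size values are only placeholders:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class MiniBatchSketch {
        public static void main(String[] args) {
            // Blink planner in streaming mode (Flink 1.9+).
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .useBlinkPlanner().inStreamingMode().build();
            TableEnvironment tEnv = TableEnvironment.create(settings);

            // Mini-batch aggregation settings; the values here are examples only.
            tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.enabled", "true");
            tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.allow-latency", "1 s");
            tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.size", "5000");
        }
    }

Mini-batching buffers a small batch of input records before touching state, trading a little latency for fewer per-record state accesses.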
Hi all, my Flink version is 1.10, but setting the YARN application name with -ynm has no effect; it always shows up as "Flink per-job
cluster".
lxk7...@163.com
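A sketch only, with a placeholder job name and jar path; my understanding (an assumption, not a confirmed diagnosis) is that -ynm is only honored when the YARN target is selected explicitly:

    ./bin/flink run -m yarn-cluster -ynm MyJobName /path/to/job.jar

The application name can also be set through the yarn.application.name configuration option.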
For Flink 1.12.1, taskmanager.memory.process.size is set to 1024m.
When running, Heap Maximum is 146 MB, Non-Heap Maximum is 744 MB, and heap usage
is about 10%-30%.
What is a reasonable heap usage rate, so that we can do further resource
optimization?
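For reference, in the Flink 1.12 memory model the JVM heap is only one slice of taskmanager.memory.process.size; the rest goes to managed (off-heap) memory, network buffers, metaspace, and JVM overhead, which is why the heap ends up far below 1024m. A sketch of the knobs that control the split (the values are examples, not a recommendation):

    taskmanager.memory.process.size: 1024m
    # fraction of Flink memory given to managed (off-heap) memory, e.g. RocksDB
    taskmanager.memory.managed.fraction: 0.4
    # or pin the task heap explicitly instead of deriving it
    # taskmanager.memory.task.heap.size: 512m
    taskmanager.memory.jvm-metaspace.size: 256m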
Hi, I met a strange issue: the same code running in a plain Java class can consume
Kafka, but when I change the Java class to a Spring bean (annotated with
@Service), the program cannot consume Kafka anymore. Has anyone met a
similar problem, or how can I debug this? Thanks a lot.
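Purely as a hypothetical reconstruction of the kind of setup described (the broker address, group id, and topic are made up, not the sender's actual code): a consumer driven from a @Service bean usually needs its poll loop on its own thread so that bean initialization does not block.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import javax.annotation.PostConstruct;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.springframework.stereotype.Service;

    @Service
    public class KafkaConsumingService {

        @PostConstruct
        public void start() {
            // Run the poll loop on its own thread so Spring context startup is not blocked.
            new Thread(this::consume, "kafka-consumer").start();
        }

        private void consume() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
            props.put("group.id", "demo-group");                 // hypothetical group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic")); // hypothetical topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }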
Hi,
I have a Flink SQL job that reads records from Kafka, uses a table function to do some
transformation, and then produces to Kafka.
I have found that in the Flink web UI the records received of the first subtask is
always 0, and the records sent of the last subtask is 0 as well.
I want to count how many r
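A sketch of the kind of pipeline described, assuming Flink 1.12 or later (TableEnvironment.executeSql and the 'kafka' DDL connector); the topic names, broker address, and the split function are hypothetical stand-ins for the sender's actual job:

    import org.apache.flink.table.annotation.DataTypeHint;
    import org.apache.flink.table.annotation.FunctionHint;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.functions.TableFunction;
    import org.apache.flink.types.Row;

    public class KafkaSqlPipelineSketch {

        // Hypothetical table function standing in for the transformation step.
        @FunctionHint(output = @DataTypeHint("ROW<word STRING>"))
        public static class SplitFunction extends TableFunction<Row> {
            public void eval(String line) {
                for (String word : line.split(" ")) {
                    collect(Row.of(word));
                }
            }
        }

        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            tEnv.createTemporarySystemFunction("split_fn", SplitFunction.class);

            // Hypothetical Kafka source table.
            tEnv.executeSql(
                    "CREATE TABLE source_topic (line STRING) WITH ("
                            + " 'connector' = 'kafka',"
                            + " 'topic' = 'in-topic',"
                            + " 'properties.bootstrap.servers' = 'localhost:9092',"
                            + " 'properties.group.id' = 'demo',"
                            + " 'scan.startup.mode' = 'latest-offset',"
                            + " 'format' = 'csv')");

            // Hypothetical Kafka sink table.
            tEnv.executeSql(
                    "CREATE TABLE sink_topic (word STRING) WITH ("
                            + " 'connector' = 'kafka',"
                            + " 'topic' = 'out-topic',"
                            + " 'properties.bootstrap.servers' = 'localhost:9092',"
                            + " 'format' = 'csv')");

            // Source -> table function -> sink, as in the pipeline described above.
            tEnv.executeSql(
                    "INSERT INTO sink_topic "
                            + "SELECT t.word FROM source_topic, LATERAL TABLE(split_fn(line)) AS t(word)");
        }
    }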
treamInternal(StreamTableEnvironment.scala:398)
at org.apache.flink.table.api.scala.StreamTableEnvironment.fromDataStream(StreamTableEnvironment.scala:85)
at org.apache.flink.table.api.scala.DataStreamConversions.toTable(DataStreamConversions.scala:58)
Thanks.
laney0...@163.com
Unsubscribe