Re: Skip Restore Incompatibility of Key Serializer for Savepoint

2025-03-16 Thread
file, replacing the old one, the new job can restore state and run well. Best regards, leilinee From: Gabor Somogyi Sent: February 10, 2025 16:48 To: 李 琳 Cc: user@flink.apache.org Subject: Re: Skip Restore Incompatibility of Key Serializer for Savepoint Seems like I wa

Unsubscribe

2025-02-19 Thread
Unsubscribe

Re: Skip Restore Incompatibility of Key Serializer for Savepoint

2025-02-09 Thread
insights would be of great help to me in resolving this issue. Thank you very much in advance. Best regards, Leilinee Get Outlook for iOS<https://aka.ms/o0ukef> From: Gabor Somogyi Sent: Saturday, February 8, 2025 4:58:48 PM To: 李 琳 Cc: user@flink.apache

Skip Restore Incompatibility of Key Serializer for Savepoint

2025-02-07 Thread
Dear All, In a Flink job using statementSet, I have two DMLs for index calculation. I made a modification to one of the DMLs, changing the group keys from "distinct_id, $city, $province, window_start, window_end" to "distinct_id, window_start, window_end". When attempting to restore the job fro
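
For context, changing the group keys also changes the key type of the aggregation operator's keyed state, which is why the key serializer recorded in the savepoint no longer matches on restore. Below is a minimal sketch of the modified StatementSet job; only the group-key columns come from the thread, while the table names (events, index_sink), the column ts, the connectors, and the windowing-TVF syntax are assumptions made for illustration.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.StatementSet;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class IndexCalculationJob {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // Hypothetical source and sink tables so the sketch is self-contained.
            tEnv.executeSql("CREATE TEMPORARY TABLE events (distinct_id STRING, ts TIMESTAMP(3), "
                    + "WATERMARK FOR ts AS ts) WITH ('connector' = 'datagen')");
            tEnv.executeSql("CREATE TEMPORARY TABLE index_sink (distinct_id STRING, window_start TIMESTAMP(3), "
                    + "window_end TIMESTAMP(3), cnt BIGINT) WITH ('connector' = 'blackhole')");

            StatementSet stmtSet = tEnv.createStatementSet();
            // The old DML grouped by: distinct_id, $city, $province, window_start, window_end.
            // Removing $city and $province changes the key type of this operator's state, so the
            // key serializer stored in the savepoint is no longer compatible with the new job.
            stmtSet.addInsertSql(
                "INSERT INTO index_sink "
                    + "SELECT distinct_id, window_start, window_end, COUNT(*) AS cnt "
                    + "FROM TABLE(TUMBLE(TABLE events, DESCRIPTOR(ts), INTERVAL '1' DAY)) "
                    + "GROUP BY distinct_id, window_start, window_end");
            stmtSet.execute();
        }
    }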

flink connector hive: support for collecting metrics

2024-11-06 Thread
Dear all, I am using the flink-connector-hive to sync records into a hive table. However, the sink connector doesn't collect operator metrics like numRecordsIn, so I can't collect this data. Is there a way to make the flink-connector-hive collect Flink metrics? Or is there some m
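
One workaround, sketched below under the assumption that the records pass through a DataStream stage before reaching the Hive sink, is to register a custom counter on a pass-through operator placed just in front of the sink, using Flink's metric group API. The metric name numRecordsToHiveSink is made up for the example.

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    // Pass-through operator that counts records on their way to the Hive sink,
    // standing in for the numRecordsIn metric the sink itself does not report.
    public class CountBeforeHiveSink<T> extends RichMapFunction<T, T> {
        private transient Counter recordsToHive;

        @Override
        public void open(Configuration parameters) {
            recordsToHive = getRuntimeContext().getMetricGroup().counter("numRecordsToHiveSink");
        }

        @Override
        public T map(T value) {
            recordsToHive.inc();
            return value;
        }
    }

It would be applied as stream.map(new CountBeforeHiveSink<>()) right before the sink; for a pure SQL pipeline this only illustrates the metric API rather than a drop-in fix.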

Unsubscribe

2024-01-18 Thread
Unsubscribe

Socket timeout when report metrics to pushgateway

2023-12-12 Thread
Hello, we set up Flink to report metrics to Prometheus Pushgateway. The program has been running for a period of time; with a large amount of data reported to the Pushgateway, the Pushgateway responds with socket timeout exceptions and much of the metrics data fails to be reported. The following is the exception: 2023-12-12 0
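
A first mitigation, assuming the bottleneck is push frequency and metric volume rather than the Pushgateway itself, is to push less often. The reporter keys below normally belong in flink-conf.yaml; they are only set programmatically here so the sketch is self-contained.

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class PushgatewayReportingSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.setString("metrics.reporter.promgateway.class",
                    "org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter");
            conf.setString("metrics.reporter.promgateway.host", "localhost");
            conf.setString("metrics.reporter.promgateway.port", "9091");
            // Report every 60 seconds instead of the 10-second default to reduce load
            // on the Pushgateway when the metric volume is large.
            conf.setString("metrics.reporter.promgateway.interval", "60 SECONDS");

            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
            env.fromElements(1, 2, 3).map(i -> i * 2).print();
            env.execute("pushgateway-interval-sketch");
        }
    }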

How to set hdfs configuration in flink kubernetes operator

2023-06-23 Thread
Hi all, Recently, I have been testing the Flink Kubernetes Operator. In the official example, the checkpoint/savepoint path is configured with a file system: state.savepoints.dir: file:///flink-data/savepoints state.checkpoints.dir: file:///flink-data/checkpoints high-availability: org.apache

Re: How to set hdfs configuration in flink kubernetes operator?

2023-06-22 Thread
ct the namenode. My main question is how to load the hdfs-core.xml file in the Flink Kubernetes operator. If you know how to do that, please let me know. I hope to receive your response via email. Thank you! From: Dongwoo Kim Sent: Wednesday, June 21, 2023 7:56:52

How to set hdfs configuration in flink kubernetes operator?

2023-06-21 Thread
Hi all, Recently, I have been testing the Flink Kubernetes Operator. In the official example, the checkpoint/savepoint path is configured with a file system: state.savepoints.dir: file:///flink-data/savepoints state.checkpoints.dir: file:///flink-data/checkpoints high-availability: org.apache.
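
Below is a sketch of the HDFS-related settings that would replace the file:// paths above; the namenode address and the /opt/hadoop/conf path are placeholders. With the Kubernetes operator these keys would go under spec.flinkConfiguration of the FlinkDeployment, and the Hadoop client configuration (core-site.xml / hdfs-site.xml) still has to be available inside the pods, e.g. baked into the image or mounted and exposed via HADOOP_CONF_DIR.

    import org.apache.flink.configuration.Configuration;

    public class HdfsCheckpointConfigSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // hdfs:// paths instead of the local file:// paths from the operator example.
            conf.setString("state.checkpoints.dir", "hdfs://namenode:8020/flink-data/checkpoints");
            conf.setString("state.savepoints.dir", "hdfs://namenode:8020/flink-data/savepoints");
            conf.setString("high-availability.storageDir", "hdfs://namenode:8020/flink-data/ha");
            // Directory holding the Hadoop client configuration; alternatively set the
            // HADOOP_CONF_DIR environment variable in the JobManager/TaskManager pods.
            conf.setString("fs.hdfs.hadoopconf", "/opt/hadoop/conf");
            System.out.println(conf);
        }
    }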

How does the JobManager bring up TaskManagers in Application Mode or Session Mode?

2022-11-28 Thread
Hi, how does the JobManager bring up TaskManagers in Application Mode or Session Mode? I can't figure it out even after reading the source code of the Flink operator. Any help will be appreciated, thank you. Mark

Re: Re: Flink Operator in an off-line k8s environment

2022-11-22 Thread
Hi Geng Biao, it works for me, thank you. At 2022-11-22 23:16:41, "Geng Biao" wrote: Hi Mark, I guess you have to create your own local image registry service that your k8s cluster can connect to, and upload the image of the flink k8s operator to that service. After that, you

Re: Re: flink sql api, exception when setting "table.exec.state.ttl"

2022-05-25 Thread 诗君
k-1.0.0-SNAPSHOT.jar/flink-dist_2.12-1.13.5.jar); I'd suggest resolving that first and see if the error persists. On 23/05/2022 14:32, 李诗君 wrote: flink version: 1.13.5 java code: StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); Env

flink sql api, exception when setting "table.exec.state.ttl"

2022-05-23 Thread 诗君
flink version: 1.13.5 java code: StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); EnvironmentSettings settings = EnvironmentSettings.newInstance() .useBlinkPlanner() .inStreamingMode() .build(); StreamTable
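
For reference, the option itself is normally set on the TableConfig. A minimal sketch against the 1.13-style API from the snippet above is shown below; note the reply in this thread points at a conflicting flink-dist jar as the first thing to rule out for the exception itself.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class StateTtlSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .useBlinkPlanner()   // matches the 1.13.5 setup quoted above
                    .inStreamingMode()
                    .build();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

            // Drop idle state of unbounded operators (e.g. non-windowed group aggregations)
            // after one day of inactivity.
            tEnv.getConfig().getConfiguration().setString("table.exec.state.ttl", "1 d");
        }
    }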

Question about Flink (Standalone)

2022-03-21 Thread 杰进13922030138
Hi~ I am Jiejin Li, a Flink user from China. I failed to submit a Flink task using Flink Standalone mode, with the following error: Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is

Re: stateSerializer(1.14.0) not compatible with previous stateSerializer(1.13.1)

2021-12-15 Thread 诗君
On Fri, Dec 10, 2021 at 3:41 AM 李诗君 wrote: I am trying to upgrade my flink cluster version from 1.13.1 to 1.14.0. I did the following steps: 1. savepoint running tasks in version 1.13.1 2. stop tasks and upgrade cluster version t

stateSerializer(1.14.0) not compatible with previous stateSerializer(1.13.1)

2021-12-09 Thread 诗君
I am trying to upgrade my Flink cluster from 1.13.1 to 1.14.0. I followed these steps: 1. take savepoints of the running tasks on version 1.13.1 2. stop the tasks and upgrade the cluster to 1.14.0 3. recover the tasks from the savepoints, and this happened: java.lang.RuntimeException: Error while getting st

Re: Does Flink support TFRecordFileOutputFormat?

2020-07-12 Thread 殿
o.BytesWritable", valueClass="org.apache.hadoop.io.NullWritable") On Jul 13, 2020, at 2:21 PM, Danny Chan wrote: I didn't see any class named TFRecordFileOutputFormat in Spark; for TF do you mean TensorFlow? Best, Danny

Does Flink support TFRecordFileOutputFormat?

2020-07-10 Thread 殿
Hi, does Flink support TFRecordFileOutputFormat? I can't find the relevant information in the documentation. As far as I know, Spark supports it. Best regards, Peidian Li
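
Flink does not ship a TFRecord output format of its own, but a Hadoop mapreduce OutputFormat can be wrapped with flink-hadoop-compatibility and fed Tuple2<BytesWritable, NullWritable> records, matching the key/value classes quoted in the reply above. The TFRecordFileOutputFormat class below is assumed to come from the TensorFlow Hadoop connector (org.tensorflow.hadoop.io); treat that class name, the output path, and the record encoding as assumptions to verify, not a confirmed recipe.

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.tensorflow.hadoop.io.TFRecordFileOutputFormat; // assumed: TensorFlow Hadoop connector

    public class TfRecordSinkSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Each record would normally be a serialized tf.train.Example; raw bytes here.
            DataSet<Tuple2<BytesWritable, NullWritable>> records = env
                    .fromElements("example-1", "example-2")
                    .map(new MapFunction<String, Tuple2<BytesWritable, NullWritable>>() {
                        @Override
                        public Tuple2<BytesWritable, NullWritable> map(String value) {
                            return Tuple2.of(new BytesWritable(value.getBytes()), NullWritable.get());
                        }
                    });

            Job job = Job.getInstance();
            FileOutputFormat.setOutputPath(job, new Path("file:///tmp/tfrecords-out")); // placeholder path
            records.output(new HadoopOutputFormat<>(new TFRecordFileOutputFormat(), job));

            env.execute("tfrecord-sink-sketch");
        }
    }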

Prometheus Pushgateway Reporter Can not DELETE metrics on pushgateway

2020-05-12 Thread 佳宸
Hi, I got stuck using Prometheus Pushgateway to collect metrics. Here is my reporter configuration: metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter metrics.reporter.promgateway.host: localhost metrics.reporter.promgateway.port: 9091
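
For the DELETE problem specifically, the reporter has two relevant switches beyond the keys quoted above; note that deleteOnShutdown only takes effect on a clean shutdown. They are shown programmatically just to list them in one place (the jobName value is a placeholder), but belong in flink-conf.yaml alongside the configuration above.

    import org.apache.flink.configuration.Configuration;

    public class PushgatewayCleanupSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Distinct job names keep different jobs from overwriting each other's groups.
            conf.setString("metrics.reporter.promgateway.jobName", "my-flink-job");
            conf.setString("metrics.reporter.promgateway.randomJobNameSuffix", "true");
            // Ask the reporter to DELETE its metrics from the Pushgateway on (clean) shutdown.
            conf.setString("metrics.reporter.promgateway.deleteOnShutdown", "true");
            System.out.println(conf);
        }
    }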

Exception when cancelling a flink job

2018-09-04 Thread 瑞亮
Hi all, I submit a Flink job in yarn-cluster mode and cancel the job with the savepoint option immediately after the job status changes to deployed. Sometimes I get this error: org.apache.flink.util.FlinkException: Could not cancel job . at org.apache.flink.client.cli.CliFrontend.lamb

Re: subquery about flink sql

2018-04-02 Thread
The exception log tells that your table "myFlinkTable" does not contain a column/field named "t". Something could be wrong with your table registration. It would be helpful to show us your table registration code, like: // register a Table tableEnv.registerTable("table1", ...)
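
A small sketch of such a registration against the 1.4-era API, making sure a field named t is actually declared; the table name comes from the thread, while the other field names and the tuple type are made up for illustration.

    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class TableRegistrationSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

            DataStream<Tuple3<String, Double, Long>> stream =
                    env.fromElements(Tuple3.of("a", 1.0, 1000L));

            // The third column is explicitly named "t"; a query (or subquery) on
            // "myFlinkTable" can only reference columns declared here.
            tableEnv.registerDataStream("myFlinkTable", stream, "id, amount, t");
        }
    }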

Re: Multiple (non-consecutive) keyBy operators in a dataflow

2018-04-02 Thread
Hello, in my opinion it would be meaningful only in this situation: 1. The total size of all your state is huge, e.g. 1GB+. 2. Splitting your job into multiple keyBy processes would reduce the size of your state. Because the operation of saving state is synchronized and all working threa
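
For reference, below is what two non-consecutive keyed stages look like in the DataStream API; the key functions and the substring-based second key are purely illustrative. The point from the reply is that the second keyBy is only worthwhile when it operates on substantially reduced state.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class TwoKeyByStagesSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<Tuple2<String, Long>> events = env.fromElements(
                    Tuple2.of("user-1", 1L), Tuple2.of("user-2", 1L), Tuple2.of("user-1", 1L));

            events
                    // First keyed stage: per-user aggregation holds per-user state only.
                    .keyBy(e -> e.f0)
                    .sum(1)
                    // Second, non-consecutive keyed stage: re-key the already reduced results.
                    .keyBy(e -> e.f0.substring(0, 4))
                    .sum(1)
                    .print();

            env.execute("two-keyby-stages-sketch");
        }
    }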

Re: Partial aggregation result sink

2018-03-12 Thread
@Override public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) throws Exception { // schedule next timer ctx.registerProcessingTimeTimer(System.currentTimeMillis() + 1000L); return TriggerResult.FIRE; }

Partial aggregation result sink

2018-03-12 Thread
Hi team, I'm working on an event-time based aggregation application with Flink SQL. Is there any way to keep sinking partial aggregation results BEFORE the time window closes? For example, my SQL: select … from my_table GROUP BY TUMBLE(`timestamp`, INTERVAL '1' DAY), othe
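
At the DataStream level the usual answer is a trigger that fires the window periodically before it closes, e.g. Flink's built-in ContinuousProcessingTimeTrigger or a custom trigger like the one quoted in the reply above. A sketch follows, with timestamp/watermark assignment omitted and the one-day window taken from the SQL example; the early firings emit partial, repeatedly updated results downstream.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

    public class EarlyFiringWindowSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            env.fromElements(Tuple2.of("key", 1L), Tuple2.of("key", 2L))
                    // timestamp extraction and watermark assignment omitted in this sketch
                    .keyBy(e -> e.f0)
                    .window(TumblingEventTimeWindows.of(Time.days(1)))
                    // Fire the current partial aggregate every 10 seconds instead of waiting
                    // for the one-day window to close.
                    .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(10)))
                    .sum(1)
                    .print();

            env.execute("early-firing-window-sketch");
        }
    }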

Lost connection to task manager

2017-04-26 Thread 猎豹移动 木柯
I ran the WordCount example; input data size is 10.9G. Command: ./bin/flink run -m yarn-cluster -yn 45 -yjm 2048 -ytm 2048 ./examples/batch/WordCount.jar --input /path --output /path1 and finally it throws the following exceptions. Can anyone give me some help? Thanks. Caused by: java.lang.Exception: