file, replacing the old one, the new Job can restore state and run
well.
Best regards, leilinee
From: Gabor Somogyi
Sent: February 10, 2025 16:48
To: 李 琳
Cc: user@flink.apache.org
主题: Re: Skip Restore Incompatibility of Key Serializer for Savepoint
Seems like I wa
insights would be of great help to me in resolving this issue. Thank you
very much in advance.
Best regards, Leilinee
From: Gabor Somogyi
Sent: Saturday, February 8, 2025 4:58:48 PM
To: 李 琳
Cc: user@flink.apache
Dear All,
In a Flink job using a StatementSet, I have two DMLs for index calculation. I
made a modification to one of the DMLs, changing the group keys from
"distinct_id, $city, $province, window_start, window_end" to "distinct_id,
window_start, window_end". When attempting to restore the job fro
Dear all,
I am using the flink-connector-hive to sync records into a Hive table. However,
the sink connector doesn't report operator metrics such as numRecordsIn,
so I cannot collect this data.
Is there a way to make flink-connector-hive report Flink metrics? Or
is there some m
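(The Hive sink may not expose numRecordsIn itself; as a hedged workaround sketch, a rich map function chained directly in front of the sink can count records through the standard metric group API. The class and metric names here are illustrative, not part of the connector.)

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Pass-through function that counts every record flowing toward the sink.
public class RecordCountingMap<T> extends RichMapFunction<T, T> {
    private transient Counter recordsToSink;

    @Override
    public void open(Configuration parameters) {
        // registered under the operator's metric group, so it is reported per subtask
        recordsToSink = getRuntimeContext().getMetricGroup().counter("recordsToHiveSink");
    }

    @Override
    public T map(T value) {
        recordsToSink.inc();
        return value;
    }
}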
Hello,
we have Flink reporting metrics to a Prometheus Pushgateway. The program had been
running for a period of time, with a large amount of data reported to the
Pushgateway, when the Pushgateway responded with a socket timeout exception and
much of the metrics data failed to be reported. The following is the exception:
2023-12-12 0
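(Two reporter options that may reduce pressure on the Pushgateway, a hedged sketch in the same flink-conf style as the configuration quoted further below; verify the exact keys against your Flink version's metrics documentation:)
metrics.reporter.promgateway.interval: 60 SECONDS
metrics.reporter.promgateway.deleteOnShutdown: true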
Hi all,
Recently, I have been testing the Flink Kubernetes Operator. In the official
example, the checkpoint/savepoint path is configured with a file system:
state.savepoints.dir: file:///flink-data/savepoints
state.checkpoints.dir: file:///flink-data/checkpoints
high-availability:
org.apache
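(The high-availability value is truncated above; for reference, with the Kubernetes HA services the full setting typically looks like the following, though this is an assumption about what the example used:)
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory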
ct the namenode. My main question is how to load the
hdfs-core.xml file in the Flink Kubernetes operator. If you know how to do
that, please let me know.
I hope to receive your response via email. Thank you!
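(One hedged possibility, assuming the Hadoop client config files, normally core-site.xml and hdfs-site.xml, are stored in a Kubernetes ConfigMap; "hadoop-config" below is a hypothetical name. Flink's native Kubernetes integration can mount such a ConfigMap via a config option, and the files are also picked up from a directory pointed to by the HADOOP_CONF_DIR environment variable in the pods:)
kubernetes.hadoop.conf.config-map.name: hadoop-config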
From: Dongwoo Kim
Sent: Wednesday, June 21, 2023 7:56:52
Hi,
How does the JobManager bring up TaskManagers in Application Mode or Session Mode? I
can't figure it out even after reading the source code of the Flink operator.
Any help will be appreciated. Thank you.
Mark
Hi Geng Biao,
It works for me, thank you.
At 2022-11-22 23:16:41, "Geng Biao" wrote:
Hi Mark,
I guess you have to create your own local image registry service to which your k8s
cluster can connect, and upload the image of the flink k8s operator to that
service. After that, you
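(A hedged sketch of that workflow; the registry address localhost:5000 and the image tag are placeholders, and the source image location is assumed to be the official one:)
docker tag ghcr.io/apache/flink-kubernetes-operator:1.2.0 localhost:5000/flink-kubernetes-operator:1.2.0
docker push localhost:5000/flink-kubernetes-operator:1.2.0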
k-1.0.0-SNAPSHOT.jar/flink-dist_2.12-1.13.5.jar);
I'd suggest resolving that first and seeing if the error persists.
On 23/05/2022 14:32, 李诗君 wrote:
flink version: 1.13.5
java code:
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
Env
Flink version: 1.13.5
Java code:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build();
// create a StreamTableEnvironment with the Blink planner
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
Hi~
I am Jiejin Li, a Flink user from China. I failed to submit a Flink task using
Flink standalone mode, with the following error:
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could
not find a file system implementation for scheme 'hdfs'. The scheme is
n
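(For this class of error, the usual fix documented by Flink is to make the Hadoop classes visible to Flink before starting the cluster or submitting, e.g.:)
export HADOOP_CLASSPATH=`hadoop classpath`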
>
> On Fri, Dec 10, 2021 at 3:41 AM 李诗君 wrote:
>>
>> I am trying to upgrade my Flink cluster from version 1.13.1 to 1.14.0. I
>> did the following steps:
>>
>> 1. take savepoints of the running tasks on version 1.13.1
>> 2. stop the tasks and upgrade the cluster version t
I am trying to upgrade my Flink cluster from version 1.13.1 to 1.14.0. I did
the following steps:
1. take savepoints of the running tasks on version 1.13.1
2. stop the tasks and upgrade the cluster version to 1.14.0
3. recover the tasks from the savepoints
and this happened:
java.lang.RuntimeException: Error while getting st
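(For reference, a hedged sketch of that upgrade flow with the CLI; the job id, savepoint directory, and jar path are placeholders:)
# 1. stop each job with a savepoint on the 1.13.1 cluster
./bin/flink stop --savepointPath hdfs:///savepoints <jobId>
# 2. upgrade the cluster binaries to 1.14.0
# 3. resubmit each job from its savepoint
./bin/flink run -s hdfs:///savepoints/savepoint-xxxx ./my-job.jar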
o.BytesWritable",
valueClass="org.apache.hadoop.io.NullWritable")
> On July 13, 2020, at 2:21 PM, Danny Chan wrote:
>
> I didn't see any class named TFRecordFileOutputFormat in Spark; by TF do you
> mean TensorFlow?
>
> Best,
> Danny
Hi,
Does Flink support TFRecordFileOutputFormat? I can't find the relevant
information in the documentation.
As far as I know, Spark supports it.
Best regards
Peidian Li
Hi,
I got stuck using Prometheus Pushgateway to collect metrics. Here is my
reporter configuration:
metrics.reporter.promgateway.class:
org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: localhost
metrics.reporter.promgateway.port: 9091
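(A few further keys commonly set for this reporter, as a hedged sketch with example values; check them against your Flink version's metrics documentation:)
metrics.reporter.promgateway.jobName: myJob
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.groupingKey: k1=v1;k2=v2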
Hi all,
I submit a Flink job in yarn-cluster mode and cancel the job with the
savepoint option immediately after the job status changes to DEPLOYED.
Sometimes I get this error:
org.apache.flink.util.FlinkException: Could not cancel job .
at
org.apache.flink.client.cli.CliFrontend.lamb
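(For reference, the cancel-with-savepoint invocation looks like this; the savepoint directory and job id are placeholders, and newer Flink versions prefer `stop` over `cancel -s`:)
./bin/flink cancel -s hdfs:///savepoints <jobId>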
The exception log says that your table "myFlinkTable" does not contain a
column/field named "t". Something could be wrong with your table
registration. It would be helpful to show us your table registration code,
like:
// register a Table
tableEnv.registerTable("table1", ...)
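(For example, registering a DataStream with explicit field names so that a column named "t" actually exists; a hedged sketch using the registerDataStream API of that era, with hypothetical field names:)
// expose the stream's fields as columns "name" and "t"
tableEnv.registerDataStream("myFlinkTable", stream, "name, t");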
Hello,
In my opinion, it would be meaningful only in this situation:
1. The total size of all your state is large enough, e.g. 1 GB+.
2. Splitting your job into multiple keyBy processes would reduce the size of
your state.
Because the operation of saving state is synchronized and all working threa
> @Override
> public TriggerResult onProcessingTime(long time, TimeWindow window,
> TriggerContext ctx) throws Exception {
> // schedule next timer
> ctx.registerProcessingTimeTimer(System.currentTimeMillis() + 1000L);
> return TriggerResult.FIRE;
> }
>
Hi team,
I'm working on an event-time based aggregation application with Flink
SQL. Is there any way to keep emitting partial aggregation results BEFORE the
time window closes?
For example, my SQL:
select …
from my_table
GROUP BY TUMBLE(`timestamp`, INTERVAL '1' DAY), othe
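(On the DataStream side, one way to get partial results before the event-time window closes is a continuous processing-time trigger, which is essentially what the quoted onProcessingTime trigger above implements by hand. A hedged sketch; the key selector and reduce function are hypothetical:)

import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

stream
    .keyBy(r -> r.getKey())
    .window(TumblingEventTimeWindows.of(Time.days(1)))
    // emit the current (partial) aggregate every 10 seconds instead of only at window end
    .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(10)))
    .reduce((a, b) -> a.merge(b));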
I ran the WordCount example; the input data size is 10.9 GB.
Command: ./bin/flink run -m yarn-cluster -yn 45 -yjm 2048 -ytm 2048
./examples/batch/WordCount.jar --input /path --output /path1
and finally it throws the following exceptions.
Can anyone give me some help? Thanks.
Caused by: java.lang.Exception: