Hi, All
My Flink version is 1.13.1, and my company has two Hadoop clusters: an offline Hadoop cluster and a realtime Hadoop cluster. Now, from the realtime Hadoop cluster, we want to submit a Flink job that connects to the offline Hadoop cluster through a different Hive catalog. I use a different Hive configuration directory in h
-p 64 \
-s hdfs://ztcluster/flink_realtime_warehouse/checkpoint/UserClickLogAll/savepoint/savepoint-59cf6c-f82cb4317494 \
-n \
-c com.datacenter.etl.ods.common.mobile.UserClickLogAll \
/opt/case/app/realtime/v1.0/batch/buryingpoint/paiping/all/realtime_etl-1.0-SNAPSHOT.jar
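A minimal sketch of one way to do this on Flink 1.13, with placeholder names and paths: register a second HiveCatalog whose hive-conf-dir contains the offline cluster's hive-site.xml (plus its core-site.xml/hdfs-site.xml):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class OfflineHiveCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());

        // Hypothetical conf directory holding the OFFLINE cluster's
        // hive-site.xml, core-site.xml and hdfs-site.xml.
        HiveCatalog offlineCatalog = new HiveCatalog(
                "offline_hive",                  // catalog name
                "default",                       // default database
                "/opt/conf/offline-hive-conf");  // hive-conf-dir

        tableEnv.registerCatalog("offline_hive", offlineCatalog);
        tableEnv.useCatalog("offline_hive");
        tableEnv.executeSql("SHOW TABLES").print();
    }
}

Each catalog carries its own Hive client configuration, so one job can address both metastores; reading the offline cluster's HDFS may additionally need that cluster's Hadoop configuration visible to the job.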
Jim Chen wrote on Mon, Aug 2, 2021:
Hi all, my Flink job consumes Kafka topic A and writes to Kafka topic B.
When I restart my Flink job via a savepoint, topic B gets some duplicate
messages. Can anyone help me solve this problem? Thanks!
My Versions:
Flink 1.12.4
Kafka 2.0.1
Java 1.8
Core code:
env.enableCheckpointing(30);
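Two things are worth checking here. enableCheckpointing(30) is a 30 ms interval, almost certainly unintended; and duplicates after a savepoint restore are expected unless the Kafka sink writes transactionally and downstream consumers read only committed data. A minimal sketch, assuming the universal Kafka connector in Flink 1.12 and placeholder broker/topic names:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE); // 30 s, not 30 ms

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        // Must not exceed the broker's transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "600000");

        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                "topicB",
                (KafkaSerializationSchema<String>) (element, timestamp) ->
                        new ProducerRecord<>("topicB", element.getBytes(StandardCharsets.UTF_8)),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

        env.fromElements("a", "b").addSink(producer);
        env.execute("exactly-once sink sketch");
    }
}

Even then, consumers of topic B only stop seeing duplicates if they set isolation.level=read_committed; records from aborted transactions remain physically in the log.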
Hi,
I use Flink 1.10.1 SQL to connect to HBase 1.4.3. When inserting into
HBase, it reports an error from validateSchemaAndApplyImplicitCast, meaning
that the query schema and the sink schema are inconsistent.
The main issue is Row(EXPR$0) in the query schema, which is an unnamed
expression, while the sink schema is Row(device_id). I don't know how to fix it.
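The usual fix is to alias every expression in the SELECT list to the sink's column names, so the query schema stops being Row(EXPR$0). A sketch with stand-in tables (datagen/blackhole here replace the real source and HBase sink; on Flink 1.10 the same INSERT would go through tableEnv.sqlUpdate()):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AliasToSinkSchemaSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().build());

        // Hypothetical stand-ins for the real source and HBase sink.
        tEnv.executeSql("CREATE TABLE src (user_id STRING) WITH ('connector' = 'datagen')");
        tEnv.executeSql("CREATE TABLE snk (device_id STRING) WITH ('connector' = 'blackhole')");

        // Without "AS device_id" the query schema is Row(EXPR$0) and
        // validateSchemaAndApplyImplicitCast rejects the INSERT.
        tEnv.executeSql("INSERT INTO snk SELECT UPPER(user_id) AS device_id FROM src");
    }
}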
Hi, everyone!
When I use Flink 1.10 to define a table, I want to define a JSON array
field as the STRING type, but the query result is null when I execute the
program. The detailed code is as follows:
package com.flink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import ...
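The snippet is cut off above, but this symptom usually comes from the JSON format: a field declared as STRING only matches a JSON string, so a JSON array deserializes to null. Declaring the field with its real structure avoids that. A sketch in the Flink 1.10 DDL style, assuming a hypothetical payload like {"events": [{"id": 1, "name": "x"}]} and placeholder connector properties:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class JsonArrayTableSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(
                env, EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Declare the array with its real element type instead of STRING.
        tEnv.sqlUpdate(
            "CREATE TABLE clicks (" +
            "  events ARRAY<ROW<id INT, name STRING>>" +
            ") WITH (" +
            "  'connector.type' = 'kafka'," +
            "  'connector.version' = '0.11'," +   // match your broker
            "  'connector.topic' = 'clicks'," +   // placeholder
            "  'connector.properties.bootstrap.servers' = 'broker:9092'," +
            "  'update-mode' = 'append'," +
            "  'format.type' = 'json'" +
            ")");

        // Elements can then be accessed directly (1-based index) or UNNESTed.
        Table first = tEnv.sqlQuery("SELECT events[1].id, events[1].name FROM clicks");
    }
}

If the raw JSON text is really what's needed, the common workaround is to ingest the whole message as a STRING and parse it in a UDF.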
>
> On Thu, 26 Mar 2020 at 08:38, Jim Chen wrote:
>
>> Thanks!
>>
>> I made a mistake. I forgot to set auto.offset.reset=false. It's my
>> fault.
>>
>> Dominik Wosiński wrote on Wed, Mar 25, 2020 at 6:49 PM:
>>
>>> Hi Jim,
>>>
Hi, All
When I use tumbling windows, I find that some records are lost. My code is as follows:
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.addSource(FlinkKafkaConsumer011..)
    .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor(Time.minutes(3)) {
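With a 3-minute bounded-out-of-orderness extractor, anything arriving more than 3 minutes behind the watermark is silently dropped by the window, which looks exactly like lost records. allowedLateness plus a late-data side output make those records visible instead. A minimal runnable sketch, with a hypothetical (key, timestamp) stream:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

public class LateDataSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Hypothetical source of (key, eventTimeMillis) pairs.
        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("k", 0L), Tuple2.of("k", 60_000L));

        OutputTag<Tuple2<String, Long>> lateTag =
                new OutputTag<Tuple2<String, Long>>("late") {};

        SingleOutputStreamOperator<Tuple2<String, Long>> counted = events
                .assignTimestampsAndWatermarks(
                        new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.minutes(3)) {
                            @Override
                            public long extractTimestamp(Tuple2<String, Long> e) {
                                return e.f1;
                            }
                        })
                .keyBy(e -> e.f0)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .allowedLateness(Time.minutes(10))  // keep window state around for stragglers
                .sideOutputLateData(lateTag)        // capture anything still later than that
                .sum(1);

        counted.getSideOutput(lateTag).print("LATE");
        counted.print("ON_TIME");
        env.execute("late data sketch");
    }
}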
> Best Regards,
> Dom.
>
> On Wed, 25 Mar 2020 at 11:19, Jim Chen wrote:
>
>> Hi, All
>> I use flink-connector-kafka-0.11 to consume Kafka 0.11. In the
>> KafkaConsumer params, I set group.id and auto.offset.reset. In
>> Flink 1.10, set the kaf
Hi, All
I use flink-connector-kafka-0.11 to consume Kafka 0.11. In the KafkaConsumer
params, I set group.id and auto.offset.reset. In Flink 1.10, I set
kafkaConsumer.setStartFromGroupOffsets();
Then, when I restart the application, I find that the offset does not resume
from the last position. Does anyone know why?
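In this connector, offsets are committed back to Kafka only when a checkpoint completes (or via the consumer's auto-commit when checkpointing is off), so with no committed offset a restart falls back to auto.offset.reset rather than the last position. A minimal sketch, assuming placeholder broker/group/topic names:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class GroupOffsetResumeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Offsets are committed back to Kafka on completed checkpoints, so
        // resuming "from group offsets" needs checkpointing enabled.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "my-group");             // placeholder
        // Only consulted when no committed offset exists for the group.
        props.setProperty("auto.offset.reset", "latest");

        FlinkKafkaConsumer011<String> consumer =
                new FlinkKafkaConsumer011<>("topicA", new SimpleStringSchema(), props);
        consumer.setStartFromGroupOffsets();
        consumer.setCommitOffsetsOnCheckpoints(true); // the default, shown for clarity

        env.addSource(consumer).print();
        env.execute("group offsets sketch");
    }
}

Note that restoring from a savepoint or checkpoint bypasses all of this: the offsets then come from Flink state, not from Kafka.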
> chain after the source /
> before the sink, or query the numRecordsOut metric for the source /
> numRecordsIn metric for the sink via the WebUI metrics tab or REST API.
>
> On 25/03/2020 10:49, Jim Chen wrote:
>
> Hi, all
> When I use flink-connector-kafka-0.11 to consume Kafka 0
Hi, all
When I use flink-connector-kafka-0.11 to consume Kafka 0.11, the cluster
web UI's Records Received is always 0. However, the log is not empty. Can
anyone help me?
[image: image.png]
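As the reply above says, numRecordsIn/numRecordsOut are measured at operator boundaries, so when the source is chained straight into the rest of the pipeline the job vertex shows 0 records received even though data is flowing. Breaking the chain makes the counts visible; a sketch with placeholder Kafka settings:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class ChainBreakSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "metrics-demo");         // placeholder

        env.addSource(new FlinkKafkaConsumer011<>("topicA", new SimpleStringSchema(), props))
           .map(value -> value)  // pass-through step just to create an operator boundary
           .startNewChain()      // break the chain so numRecordsIn/Out are reported
           .print();

        env.execute("chain break sketch");
    }
}

Alternatively, env.disableOperatorChaining() breaks every chain in the job, or the per-operator numRecordsOut/numRecordsIn metrics can be queried via the WebUI metrics tab or the REST API, as suggested above.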