other Java primitive types
> since Object keys are much slower when reading from and writing to a state
> store.
>
> Thanks
> Sachin
>
>
> On Wed, May 15, 2024 at 7:58 AM longfeng Xu
> wrote:
>
>> Thank you. We will try.
>>
>> I'm still confused about mu
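The suggestion at the top of this thread is to key by long, int, or another primitive rather than by a full object. A minimal, self-contained sketch of that, with a made-up Event POJO and field names; the point is just that the key extracted by keyBy is a primitive long instead of the whole record:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PrimitiveKeyExample {
    // Illustrative POJO; only the primitive userId is used as the key.
    public static class Event {
        public long userId;
        public double amount;
        public Event() {}
        public Event(long userId, double amount) { this.userId = userId; this.amount = amount; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(new Event(1L, 10.0), new Event(1L, 5.0), new Event(2L, 7.5))
           // The key is hashed and serialized on every state access, so a
           // fixed-size primitive is much cheaper than keying by the whole POJO.
           .keyBy(e -> e.userId)
           .sum("amount")
           .print();

        env.execute("primitive-key-sketch");
    }
}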
hi
there are many Flink jobs reading the same Kafka topic in this scenario, so CPU resources are wasted on repeated serialization/deserialization and the network load is too heavy. Can you recommend a solution to avoid this situation? For example, would it be more efficient to use one large streaming job with multiple branches (sketched below)?
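A rough sketch of that idea, with the broker address, topic name, group id, and branch predicates all invented for illustration; the topic is read and deserialized exactly once, and each branch is just another operator chain on the same stream:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class SharedTopicFanOut {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
        props.setProperty("group.id", "shared-consumer");     // placeholder

        // Consume and deserialize the topic once ...
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props));

        // ... then fan out: each branch filters/transforms and writes to its own sink
        // (print() stands in for the real sinks here).
        raw.filter(line -> line.contains("order")).print("orders");
        raw.filter(line -> line.contains("click")).print("clicks");

        env.execute("shared-topic-fan-out");
    }
}

Whether one big job with branches is actually better also depends on isolation and restart behaviour, since every branch then fails over and is upgraded together.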
hi all,
background:
I created a custom metric reporter that publishes metrics via Kafka. It works in the local IntelliJ IDEA environment, but fails when packaged and deployed in the k8s environment (Ververica by Alibaba), on Flink 1.12.
config:
metrics.reporter.kafka.factory.class:
org.apache.flink.metrics.kafka.KafkaReporterFactory
metrics.reporter.kafka.se
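For reference, a bare skeleton of the factory/reporter pair that factory.class points at (the class names match the config above, but all the Kafka-producing logic is left out):

package org.apache.flink.metrics.kafka;

import java.util.Properties;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;
import org.apache.flink.metrics.reporter.MetricReporterFactory;

public class KafkaReporterFactory implements MetricReporterFactory {
    @Override
    public MetricReporter createMetricReporter(Properties properties) {
        return new KafkaReporter();
    }
}

class KafkaReporter implements MetricReporter {
    @Override
    public void open(MetricConfig config) {
        // Create the Kafka producer from the reporter's config here.
    }

    @Override
    public void close() {
        // Flush and close the producer.
    }

    @Override
    public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
        // Start publishing this metric.
    }

    @Override
    public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
        // Stop publishing this metric.
    }
}

In open-source Flink 1.12 a reporter like this is loaded as a plugin, so on the cluster the jar normally has to sit under plugins/<some-dir>/ in the image and declare the factory in META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory; in the IDE everything is on one classpath, which is one way the same code can work locally and fail on k8s. Whether the Ververica image expects the same layout is worth checking.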
Hello,
The issue I'm encountering revolves around:
1. Aggregating product sales for each minute. The sales data comes from Kafka with event time.
2. If there is no data in a given minute, the program should produce a default zero (a sketch of one approach follows below).
3. All times I mention are event times; processing time is not used, to allow for reruns.
Flink 1.13.3
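One common DataStream-level approach to point 2, sketched with an invented tuple layout (a Table/SQL pipeline would need a different fix): event-time tumbling windows do not fire for minutes that received no input, so a keyed process function can register a timer per minute boundary and emit the running total, which is 0 when nothing arrived.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Input: (productId, amount, eventTime); output: (productId, minuteStart, total).
public class PerMinuteSalesWithZeros
        extends KeyedProcessFunction<String, Tuple3<String, Long, Long>, Tuple3<String, Long, Long>> {

    private static final long MINUTE = 60_000L;
    private transient ValueState<Long> runningTotal;

    @Override
    public void open(Configuration parameters) {
        runningTotal = getRuntimeContext()
                .getState(new ValueStateDescriptor<>("total", Types.LONG));
    }

    @Override
    public void processElement(Tuple3<String, Long, Long> sale, Context ctx,
                               Collector<Tuple3<String, Long, Long>> out) throws Exception {
        Long total = runningTotal.value();
        runningTotal.update((total == null ? 0L : total) + sale.f1);
        // Fire at the end of the minute this event belongs to.
        long minuteEnd = (sale.f2 / MINUTE + 1) * MINUTE;
        ctx.timerService().registerEventTimeTimer(minuteEnd);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx,
                        Collector<Tuple3<String, Long, Long>> out) throws Exception {
        Long total = runningTotal.value();
        out.collect(Tuple3.of(ctx.getCurrentKey(), timestamp - MINUTE,
                total == null ? 0L : total));
        runningTotal.clear();
        // Keep a timer chained for the next minute so empty minutes still emit 0.
        ctx.timerService().registerEventTimeTimer(timestamp + MINUTE);
    }
}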
The custom connector reuses the Flink Kafka connector code with a little refactoring, and it could be loaded in Flink 1.12 when using StreamTableEnvironment.
Now that Flink has been upgraded to 1.13.3 and the custom connector dependencies have also been upgraded to 1.13.3, it fails to load:
java.lang.NoSuchMet
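A NoSuchMethodError at load time generally means the connector was compiled against (or still bundles) table classes from a different Flink version than the 1.13.3 runtime, so rebuilding it against 1.13.3 with the Flink dependencies marked as provided is the usual first check. For orientation only, a bare 1.13-style factory entry point looks like the sketch below (connector name and option are invented), and it has to be listed in META-INF/services/org.apache.flink.table.factories.Factory for the planner to find it:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.factories.DynamicTableSourceFactory;

public class MyKafkaLikeFactory implements DynamicTableSourceFactory {

    // Hypothetical option, for illustration only.
    public static final ConfigOption<String> TOPIC =
            ConfigOptions.key("topic").stringType().noDefaultValue();

    @Override
    public String factoryIdentifier() {
        return "my-kafka"; // the value used in WITH ('connector' = 'my-kafka')
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(TOPIC);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        // A real connector validates the options and builds a ScanTableSource here;
        // left out because this sketch is only about the factory contract.
        throw new UnsupportedOperationException("sketch only");
    }
}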
hi guys,
when using Alibaba Flink (based on Flink 1.12) to run Nexmark's query0, it failed: blackhole is not a supported sink connector.
So how can I upload a blackhole connector, the way the Nexmark connector is uploaded? Where can I download it?
thanks
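For comparison, in open-source Flink the blackhole connector is built in, so a DDL like the one in this sketch needs no extra jar (table name and schema are arbitrary); whether the Alibaba build exposes that built-in connector, or requires uploading a custom one instead, is the open question here:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BlackholeSinkSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // In open-source Flink 1.12+ the blackhole connector ships with the
        // table runtime, so this DDL works without any additional connector jar.
        tEnv.executeSql(
                "CREATE TABLE sink_discard (f0 BIGINT) WITH ('connector' = 'blackhole')");
    }
}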