Hi,
I wanted to know whether multiple Kafka sources are supported in a single
data pipeline within Flink.
I am looking at a scenario where data transformation and enrichment need
to be done once a message from each of the sources has been received,
matched on a common identifier.
I coded the logic and it looks to be wor
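Yes, a single Flink job can consume from several Kafka sources; the usual pattern for this kind of enrichment is to key both streams by the common identifier, connect them, and buffer each side in keyed state until its counterpart arrives (e.g. in a CoProcessFunction). The per-key matching step can be sketched outside the Flink API; this is a minimal Python sketch of that buffering logic, with a hypothetical dict-merge standing in for the enrichment:

```python
# Sketch of the per-key buffering used to join two streams on a common id.
# Each side is held until its counterpart arrives; then the pair is emitted.
class TwoStreamJoiner:
    def __init__(self):
        self.left_buffer = {}   # id -> pending message from the first source
        self.right_buffer = {}  # id -> pending message from the second source

    def on_left(self, key, msg):
        if key in self.right_buffer:
            return self._enrich(msg, self.right_buffer.pop(key))
        self.left_buffer[key] = msg
        return None  # no match yet; wait for the other side

    def on_right(self, key, msg):
        if key in self.left_buffer:
            return self._enrich(self.left_buffer.pop(key), msg)
        self.right_buffer[key] = msg
        return None

    @staticmethod
    def _enrich(left, right):
        # Hypothetical enrichment: merge the two payloads into one record.
        return {**left, **right}

joiner = TwoStreamJoiner()
print(joiner.on_left("order-1", {"amount": 10}))          # no match yet -> None
print(joiner.on_right("order-1", {"customer": "alice"}))  # match -> merged record
```

In Flink itself the two buffers would be keyed state, so the same logic scales out per key and survives failures.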
Hi Neha,
1. You can add the jemalloc path to LD_LIBRARY_PATH on YARN[1],
and here is a blog post about "RocksDB Memory Usage"[2].
2. The default value of cleanupInRocksdbCompactFilter is 1000[3];
a different value can be chosen according to the TPS of the job. The
value of `state.backend.rock
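For point 1, on YARN the container environment can be set through Flink configuration. A sketch of the relevant flink-conf.yaml entries, with placeholder paths (adjust to where jemalloc is installed on the nodes):

```yaml
# Make jemalloc visible to TaskManager containers on YARN (paths are placeholders).
containerized.taskmanager.env.LD_LIBRARY_PATH: /usr/local/lib/jemalloc/lib
containerized.taskmanager.env.LD_PRELOAD: /usr/local/lib/jemalloc/lib/libjemalloc.so
```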
Hi longfeng,
I think you should check which module contains the BlackHole connector
code; then you can place that module's jar in the Flink lib directory.
Best,
Ron
Hang Ruan wrote on Wednesday, June 28, 2023 at 16:48:
> Hi, longfeng,
>
> I checked the blackhole connector document[1] and the blackhole connector is
> a b
Hi, Xiaolong
As Shammon says, I think you should check the exception info of the Flink
cluster first to confirm the root cause.
Best,
Ron
Shammon FY wrote on Tuesday, July 4, 2023 at 16:44:
> Hi Xiaolong,
>
> I think you may need to check the error log in the flink cluster to find
> out the root cause.
>
> Best,
> Shammon
Hi,
As Hang says, please send an email to user-unsubscr...@flink.apache.org if
you want to unsubscribe from user@flink.apache.org.
Best,
Ron
Ragini Manjaiah wrote on Wednesday, July 5, 2023 at 13:36:
> Unsubscribe
>
> On Tue, Jul 4, 2023 at 1:33 PM Bauddhik Anand wrote:
>
>> Unsubscribe
>>
>
Hm. I wonder if you could implement a custom Deserializer that wraps both
the CSV and Protobuf deserializer, and conditionally chooses which one to
use. As long as the final TypeInformation returned by the Source is the
same in either case, I think it should work?
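The wrapping idea can be sketched outside of Flink: an outer deserializer inspects the raw bytes and delegates to one of two inner deserializers, and as long as both produce the same record type the downstream pipeline never sees the difference. A minimal Python sketch, where the format check is a placeholder heuristic and JSON stands in for protobuf purely to keep the example self-contained (real code would use Flink's DeserializationSchema and a real protobuf parser):

```python
import json

def deserialize_csv(raw: bytes) -> dict:
    # Hypothetical CSV layout: id,amount
    ident, amount = raw.decode("utf-8").split(",")
    return {"id": ident, "amount": int(amount)}

def deserialize_binary(raw: bytes) -> dict:
    # Stand-in for the protobuf deserializer; JSON is used here only
    # so the sketch runs without external dependencies.
    return json.loads(raw)

def deserialize(raw: bytes) -> dict:
    # Placeholder format check: production code would key off the topic,
    # a record header, or magic bytes rather than guessing from the payload.
    if raw[:1] == b"{":
        return deserialize_binary(raw)
    return deserialize_csv(raw)

print(deserialize(b"order-1,10"))
print(deserialize(b'{"id": "order-2", "amount": 20}'))
```

Both branches return the same record shape, which is the analogue of the Source reporting one TypeInformation in either case.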
> Kafka comes from protobuf while
And this is our case, Alexander: it is the same data schema but a different
data format. Kafka comes as protobuf while the CSV is read as a POJO, though
both have the same fields. IMHO, the design of HybridSource is very limited,
and you have to do nasty workarounds if you want to combine from cold storage
(C
I do not think that trying to "squash" two different data types into one
just to use HybridSource is the right thing to do here. HybridSource is
primarily intended for use cases that need to read the same data from
different sources. A typical example: read events from "cold storage" in S3
up to a
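The switch-over idea behind HybridSource can be sketched in a few lines: drain a bounded historical source to completion, then continue with the unbounded live source, with both yielding the same record type. A minimal Python sketch (the sources here are plain iterables standing in for S3 files and a Kafka topic):

```python
def hybrid_read(cold_storage, live_stream):
    # Drain the bounded historical source first, then switch to the
    # unbounded live source; both must yield the same record type.
    yield from cold_storage
    yield from live_stream

history = [{"id": 1}, {"id": 2}]         # e.g. backfill files from S3
live = iter([{"id": 3}])                 # e.g. a Kafka topic, from the cutover offset
print(list(hybrid_read(history, live)))  # history first, then live, one record type
```

This is why HybridSource assumes one data type across its constituent sources: the consumer is meant to be unaware of which source a record came from.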
Hi all,
While working on a streaming application built with Flink, I have found some
issues and would like to ask for advice.
First, our application's key configurations are as below.
flink version: 1.17.0
state.backend: "rocksdb"
state.backend.incremental: "true"
state.backend.changelog.enabled: "true