Hello Kartik,
For your case, if events are ingested at 300 per minute, that is 300/60 = 5
events per second; with a payload size of 2 KB, the ingestion rate is
5 * 2 KB = 10 KB per second. The network buffer size is 32 KB by default.
You can also decrease the value to 16 KB.
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/deployment/config/#tas
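For reference, a minimal sketch of changing that segment size from code, assuming the option being discussed is taskmanager.memory.segment-size; the class name and local environment below are placeholders, and on a real cluster this normally goes into flink-conf.yaml:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SmallerNetworkBuffers {
    public static void main(String[] args) throws Exception {
        // Assumed option: shrink the network buffer segment size from the
        // default 32kb to 16kb for a low-throughput job (~10 KB/s here).
        Configuration conf = new Configuration();
        conf.setString("taskmanager.memory.segment-size", "16kb");

        // Local environment only for illustration; on a cluster this is a
        // cluster-level setting in flink-conf.yaml.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(conf);

        // ... build and execute the job as usual ...
    }
}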
Thank you. I will check and get back on both the suggestions made by
Asimansu and Xuyang.
I am using Flink 1.17.0
Regards,
Kartik
On Mon, Apr 1, 2024, 5:13 AM Asimansu Bera wrote:
> Hello Karthik,
>
> You may check the execution-buffer-timeout-interval parameter. This value
> is an important one for your case.
Hello Karthik,
You may check the execution-buffer-timeout-interval parameter. This value
is an important one for your case. I experienced a similar issue in the
past.
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/config/#execution-buffer-timeout-interval
For your
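For example, a minimal sketch of tuning that timeout from the DataStream API; the 100 ms value is just an assumption for illustration, and env.setBufferTimeout maps to the same setting as the config option above:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Flush network buffers at most every 100 ms even when they are not
        // full, so a low-volume stream does not wait for a 32 KB buffer to fill.
        env.setBufferTimeout(100);

        // ... sources, transformations, sinks ...
        // env.execute("buffer-timeout-demo");
    }
}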
Hi Sigalit,
In your settings, I guess each job will only have one slot (parallelism of
one). So is the input too heavy for jobs with a parallelism of only one? One
easy way to confirm is to double your slots and job parallelism and then see
whether the QPS increases.
Hope this would
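A rough sketch of that experiment, assuming a local setup where both the slot count and the parallelism are doubled; taskmanager.numberOfTaskSlots and setParallelism are the knobs, and the values are placeholders:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismExperiment {
    public static void main(String[] args) throws Exception {
        // Double the slots per TaskManager and the job parallelism, then
        // compare the observed QPS against the single-slot run.
        Configuration conf = new Configuration();
        conf.setString("taskmanager.numberOfTaskSlots", "2");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(conf);
        env.setParallelism(2); // previously 1

        // ... rest of the pipeline unchanged ...
    }
}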
Hi Sigalit,
First of all, did you ensure that each source operator uses a different
consumer group id for the Kafka source? Does each Flink job share the same
data or consume the data independently?
Moreover, is your job back pressured? You might need to break the operator
chain to see w
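As an illustration only, a sketch of both points: giving each job its own Kafka consumer group id and breaking the operator chain so the web UI shows which operator is back pressured. The topic, bootstrap servers and the map step are placeholders:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaGroupAndChaining {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Each job that should consume the topic independently needs its own group id.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")      // placeholder address
                .setTopics("events")                    // placeholder topic
                .setGroupId("job-a-consumer-group")     // unique per job
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        // Break the operator chain so the web UI shows which operator back-pressures.
        events.map(String::toUpperCase)
              .disableChaining()
              .print();

        env.execute("kafka-chaining-demo");
    }
}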
Hi Aissa,
It looks like your requirement is to enrich real-time stream data (from
Kafka) with dimension data (in your case something like {sensor_id,
equipment_id, workshop_id, factory_id}). You can achieve this with the Flink
DataStream API or just use Flink SQL. I think using pure SQL will be easier if you
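A rough Flink SQL sketch of that enrichment, run through a Java TableEnvironment. Every connector option, table name and column here is a placeholder, and the dimension table is assumed to live in a JDBC database so it can be used in a lookup join:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SensorEnrichmentSql {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Real-time sensor readings from Kafka (placeholder topic/servers/format).
        tEnv.executeSql(
            "CREATE TABLE sensor_readings (" +
            "  sensor_id STRING," +
            "  equipment_id STRING," +
            "  reading DOUBLE," +
            "  proc_time AS PROCTIME()" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'sensor-readings'," +
            "  'properties.bootstrap.servers' = 'kafka:9092'," +
            "  'format' = 'json'" +
            ")");

        // Dimension data: equipment -> workshop -> factory (placeholder JDBC table).
        tEnv.executeSql(
            "CREATE TABLE equipment_dim (" +
            "  equipment_id STRING," +
            "  workshop_id STRING," +
            "  factory_id STRING" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:mysql://db:3306/dims'," +
            "  'table-name' = 'equipment_dim'" +
            ")");

        // Lookup join enriches each reading with its workshop and factory.
        tEnv.executeSql(
            "SELECT r.sensor_id, r.equipment_id, d.workshop_id, d.factory_id, r.reading " +
            "FROM sensor_readings AS r " +
            "JOIN equipment_dim FOR SYSTEM_TIME AS OF r.proc_time AS d " +
            "ON r.equipment_id = d.equipment_id").print();
    }
}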
You will have to enrich the data coming in, e.g. { "equipment-id" :
"1-234", "sensor-id" : "1-vcy", ... }. Since you will most likely have
a keyed stream based on equipment-id+sensor-id or equipment-id, you can have
a control stream with data about the equipment-to-workshop/factory mapping,
somethin
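Along those lines, a minimal sketch of a broadcast control stream that carries the equipment-to-workshop/factory mapping. The inputs, field layout and string parsing are placeholders, and a keyed main stream would use KeyedBroadcastProcessFunction instead of the non-keyed variant shown here:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class ControlStreamEnrichment {

    // equipment_id -> "workshopId|factoryId" mapping broadcast to all subtasks.
    static final MapStateDescriptor<String, String> MAPPING =
            new MapStateDescriptor<>("equipment-mapping", Types.STRING, Types.STRING);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder inputs; in the real job both come from Kafka sources.
        DataStream<String> readings = env.fromElements("1-234,1-vcy,42.0");
        DataStream<String> mappings = env.fromElements("1-234,workshop-7,factory-2");

        BroadcastStream<String> control = mappings.broadcast(MAPPING);

        readings
            .connect(control)
            .process(new BroadcastProcessFunction<String, String, String>() {
                @Override
                public void processElement(String reading, ReadOnlyContext ctx,
                                           Collector<String> out) throws Exception {
                    // Enrich each reading with the latest broadcast mapping.
                    String equipmentId = reading.split(",")[0];
                    String mapping = ctx.getBroadcastState(MAPPING).get(equipmentId);
                    out.collect(reading + " -> " + (mapping == null ? "unknown" : mapping));
                }

                @Override
                public void processBroadcastElement(String update, Context ctx,
                                                    Collector<String> out) throws Exception {
                    // Control stream updates the equipment -> workshop/factory mapping.
                    String[] parts = update.split(",");
                    ctx.getBroadcastState(MAPPING).put(parts[0], parts[1] + "|" + parts[2]);
                }
            })
            .print();

        env.execute("control-stream-enrichment");
    }
}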
There is a Go CLI for automating deploying and updating Flink jobs; you
can integrate a Jenkins pipeline with it, maybe it helps.
https://github.com/ing-bank/flink-deployer
Navneeth Krishnan wrote on Tue, Apr 9, 2019, at 10:34 AM:
> Hi All,
>
> We have some streaming jobs in production and today we manually de