Dead Letter Queue for FlinkSQL

2025-02-03 Thread Ilya Karpov
Hi there, because sink connectors can throw exceptions at runtime (for example, due to a constraint violation), it is good practice to have a DLQ so that data processing can continue. Is there any way to implement/configure a DLQ in Flink SQL? Thanks in advance!
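As far as I know, Flink SQL has no built-in DLQ option; a common workaround is to validate rows in SQL and route failures to a separate sink inside one job. A minimal sketch, assuming illustrative table and column names (`events_raw`, `events_clean`, `events_dlq`, `amount`) and Flink 1.16+ for `TRY_CAST`:

```sql
-- Route rows that fail validation to a DLQ table instead of failing the sink.
-- Table and column names are illustrative, not from the thread.
EXECUTE STATEMENT SET
BEGIN
  INSERT INTO events_clean
  SELECT * FROM events_raw
  WHERE TRY_CAST(amount AS DECIMAL(10, 2)) IS NOT NULL;

  INSERT INTO events_dlq
  SELECT * FROM events_raw
  WHERE TRY_CAST(amount AS DECIMAL(10, 2)) IS NULL;
END;
```

Note this only catches errors detectable before the sink; exceptions raised inside the external system itself (e.g. a database constraint violation) cannot be intercepted from SQL this way.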

Re: Kafka SQL Connector loses data if deserialization error occurs

2024-10-09 Thread Ilya Karpov
Finally I found that in order to execute many insert statements in one job I need to use a STATEMENT SET. This solved the problem. Wed, 9 Oct 2024 at 12:17, Ilya Karpov: > During this morning's debugging I found that if I comment out one of the two insert > expressions and submit the SQL, then o
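For reference, the syntax that groups several inserts into a single job (SQL client form since Flink 1.15; sink and source names illustrative):

```sql
-- Both inserts share one job, so the source topic is read only once.
EXECUTE STATEMENT SET
BEGIN
  INSERT INTO sink_a SELECT id, payload FROM events;
  INSERT INTO sink_b SELECT id, created_at FROM events;
END;
```

Without the statement set, each INSERT submitted in the SQL client starts its own job, and the two jobs consume the source independently.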

Re: Kafka SQL Connector loses data if deserialization error occurs

2024-10-09 Thread Ilya Karpov
NG INTEGER) as payload_browser_version , JSON_VALUE(message, '$.payload.fp_score' RETURNING DOUBLE) as payload_fp_score , TO_TIMESTAMP(JSON_VALUE(message, '$.created_at' RETURNING STRING), 'yyyy-MM-dd HH:mm:ss.SSSX') as created_at FROM default_catalog.default_database.event
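The quoted query is cut off above; reconstructed only from the visible fragment, the extraction pattern looks roughly like this (the `message` column and `event` table come from the snippet, the rest is illustrative):

```sql
-- Extract typed fields from a raw JSON string column.
SELECT
  JSON_VALUE(message, '$.payload.fp_score' RETURNING DOUBLE) AS payload_fp_score,
  TO_TIMESTAMP(
    JSON_VALUE(message, '$.created_at' RETURNING STRING),
    'yyyy-MM-dd HH:mm:ss.SSSX'
  ) AS created_at
FROM default_catalog.default_database.event;
```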

Re: Status of ClickHouseSink

2024-10-03 Thread Ilya Karpov
Sounds great, I'll try it, thanks! Thu, 3 Oct 2024 at 20:54, Yaroslav Tkachenko: > Yes, quite well! :) We've been using it in production for many months now. > > On Thu, Oct 3, 2024 at 10:50 AM Ilya Karpov wrote: > >> Yaroslav, >> Yep, I saw it, did

Re: Status of ClickHouseSink

2024-10-03 Thread Ilya Karpov
>> It works for me with Flink version 1.8. >> >> I am using this in prod. Somehow it's simpler to use this to ingest data >> into clickhouse than setting up Kafka + clickpipe. >> >> On Thu, 3 Oct 2024 at 7:51 PM, Ilya Karpov wrote: >> >>> Seems that

Re: Status of ClickHouseSink

2024-10-03 Thread Ilya Karpov
ckhouse-sink > > > > On Thu, Oct 3, 2024 at 4:54 PM Ilya Karpov wrote: > >> Hi, >> I've been searching for an implementation of kafka to clickhouse sink and >> found FLIP >> <https://cwiki.apache.org/confluence/display/FLINK/%5BDRAFT%5D+FLIP-202%3A+I

Status of ClickHouseSink

2024-10-03 Thread Ilya Karpov
Hi, I've been searching for an implementation of a Kafka-to-ClickHouse sink and found the FLIP and connector sources. Can anyone clarify if

Re: A way to meter number of deserialization errors

2024-06-21 Thread Ilya Karpov
h the idea to understand and > potentially fix the source / parser limitations. > > > > Kind regards, David. > > > > From: Ilya Karpov > Date: Monday, 17 June 2024 at 12:39 > To: user > Subject: [EXTERNAL] A way to meter number of deserialization errors > >

Re: A way to meter number of deserialization errors

2024-06-19 Thread Ilya Karpov
Does anybody experience the problem of metering deserialization errors? Mon, 17 Jun 2024 at 14:39, Ilya Karpov: > Hi all, > we are planning to use Flink as a connector between Kafka and > external systems. We use protobuf as the message format in Kafka. If > non-backward compat

A way to meter number of deserialization errors

2024-06-17 Thread Ilya Karpov
Hi all, we are planning to use Flink as a connector between Kafka and external systems. We use protobuf as the message format in Kafka. If non-backward-compatible changes occur we want to skip those messages ('protobuf.ignore-parse-errors' = 'true') but record an error and raise an alert. I didn't fi
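The option mentioned above is set in the table definition. A hedged sketch, assuming the Kafka connector with the protobuf format (topic, schema, and class names are illustrative):

```sql
CREATE TABLE events (
  id BIGINT,
  payload STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'protobuf',
  'protobuf.message-class-name' = 'com.example.Event',
  -- rows that fail protobuf parsing are skipped instead of failing the job:
  'protobuf.ignore-parse-errors' = 'true'
);
```

As the thread discusses, this option skips bad records silently; counting them for alerting requires an additional mechanism, which is exactly what the question asks about.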

Testing application pipeline with test Source/Sink implementations

2021-11-23 Thread Ilya Karpov
Dear flink community, I'm trying to migrate from Flink 1.10 to 1.14; I replaced FlinkKafkaConsumer011 -> KafkaSource and BucketingSink -> FileSink without problems. I want to test the whole application pipeline but can't find any Source/Sink classes suitable to substitute for KafkaSource/FileSink. Previously I
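The thread concerns DataStream sources/sinks, but for completeness: on the Table/SQL side Flink ships built-in test connectors (`datagen`, `print`, `blackhole`) that can stand in for real sources and sinks during pipeline testing. A sketch with illustrative table names:

```sql
-- A generated source and a console sink, useful for wiring tests.
CREATE TABLE fake_source (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '10'
);

CREATE TABLE test_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'print'
);

INSERT INTO test_sink SELECT id, name FROM fake_source;
```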

Re: Flink kafka consumers stopped consuming messages

2021-06-09 Thread Ilya Karpov
ikely fixed in the meantime. So if you > want to be on the safe side, I'd try to upgrade to more recent versions > (Flink + Kafka consumer). > > Best, > > Arvid > > On Wed, Jun 2, 2021 at 7:01 PM Ilya Karpov <mailto:idkf...@gmail.com>> wrote: > Hi there,

Flink kafka consumers stopped consuming messages

2021-06-02 Thread Ilya Karpov
Hi there, today I observed strange behaviour in a Flink streaming application (Flink 1.6.1, per-job cluster deployment, YARN): 3 task managers (2 slots each) are running but only 1 slot is actually consuming messages from Kafka (v0.11.0.2); the others were idling (currentOutputWatermark was stuc

Re: Flink in k8s operators list

2021-05-31 Thread Ilya Karpov
iting for > feed-back. It's all free and OSS, so who are we to complain? Though it's > still an important attention point. > > Hope this helps, > > Svend > > > > > > On Fri, 28 May 2021, at 9:09 AM, Ilya Karpov wrote: >> Hi there, >>

Flink in k8s operators list

2021-05-28 Thread Ilya Karpov
Hi there, I'm doing a little research on the easiest way to deploy a Flink job to a k8s cluster and manage its lifecycle with a k8s operator. The list of solutions is below: - https://github.com/fintechstudios/ververica-platform-k8s-operator - https://github.com/GoogleCloudPlatform/flink-on-k8s-opera

Importance of calling mapState.clear() when no entries in mapState

2020-03-24 Thread Ilya Karpov
Hi, given: - flink 1.6.1 - stateful function with MapState mapState = //init logic; Is there any reason I should call mapState.clear() if I know beforehand that there are no entries in mapState (e.g. mapState.iterator().hasNext() returns false)? Thanks in advance!

Re: kafka corrupt record exception

2019-04-02 Thread Ilya Karpov
According to the docs (here: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#the-deserializationschema , last paragraph) that's the expected behaviour. May be

Re: What is Flinks primary API language?

2019-03-26 Thread Ilya Karpov
FLIP-6: Replace the Scala-implemented JobManager.scala and TaskManager.scala with the > new JobMaster.java and TaskExecutor.java > FLIP-32: Make flink-table Scala-free. > > But at the API level, from my point of view, I have never heard of any plan to stop > supporting Scala. > > Best >

What is Flinks primary API language?

2019-03-26 Thread Ilya Karpov
Hello, our dev team is choosing a language for developing Flink jobs. Most likely we will use the flink-streaming API (at least in the very beginning). Because of our prior experience developing Spark jobs, the choice for now is the Scala API. However, recently I found a ticket (https://issues

Scala API support plans

2019-03-13 Thread Ilya Karpov
Hi guys, what are the plans for Scala API support? I'm asking because I've recently found that the Scala API has to catch up with the Java one. Thanks!