Re: Kafka sink producing record at event timestamp

2024-10-25 Thread Sebastian Zapata
There is one option of configuring this in Kafka at the topic level: """ By default Kafka will use the timestamp provided by the producer. However, you can also make Kafka update the timestamp when it writes the record to the log by setting messa
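For reference, the topic property truncated above is message.timestamp.type (values: CreateTime, LogAppendTime). A minimal sketch of flipping that switch with the Kafka AdminClient; broker address and topic name are placeholders:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicTimestampType {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Make the broker stamp records at append time instead of keeping
            // the producer-provided (event) timestamp.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("message.timestamp.type", "LogAppendTime"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
        }
    }
}
```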

Kafka sink producing record at event timestamp

2024-10-25 Thread Sachin Mittal
Hi, I have a pipeline whose source and sink are two Kafka topics. The pipeline uses event-time semantics, where the event time is extracted from the record. What I notice is that when producing records at the sink side, the record's time in the kafka topic is the same a

Re: flink kafka sink batch mode delivery guarantees limitations

2024-08-23 Thread Nicolas Paris
Thanks, I missed that warning! All right.

Re: flink kafka sink batch mode delivery guarantees limitations

2024-08-23 Thread Ahmed Hamdy
RollingPolicy <https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/connectors/datastream/filesystem/#rolling-policy> won’t work. Best Regards Ahmed Hamdy On Fri, 23 Aug 2024 at 11:48, Nicolas Paris wrote: > hi > > From my tests kafka sink in exactly-once and ba

flink kafka sink batch mode delivery guarantees limitations

2024-08-23 Thread Nicolas Paris
Hi, from my tests a kafka sink with exactly-once guarantees in the batch runtime will never commit the transaction, so the semantics are not honoured. This is likely by design, since records are acked/committed during a checkpoint, which never happens in batch mode. Am I missing something, or the documentation sho
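A commonly suggested workaround for batch jobs follows from that observation: since transactional commits are tied to checkpoints, drop to AT_LEAST_ONCE. A sketch with placeholder broker and topic:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

// In batch execution mode no checkpoints run, so EXACTLY_ONCE transactions are
// never committed; AT_LEAST_ONCE only relies on flushing the producer, which
// also happens when the batch job finishes.
KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")              // placeholder
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("out-topic")                   // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .build();
```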

RE: RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-02-01 Thread Kirti Dhar Upadhyay K via user
Thanks Jiabao and Yaroslav for your quick responses. Regards, Kirti Dhar From: Yaroslav Tkachenko Sent: 01 February 2024 21:42 Cc: user@flink.apache.org Subject: Re: RE: Flink Kafka Sink + Schema Registry + Message Headers The schema registry support is provided in

Re: RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-02-01 Thread Yaroslav Tkachenko
K via user wrote: > Hi Jiabao, > Thanks for the reply. > Currently I am using Flink 1.16.1 and I am not able to find any HeaderProvider setter method in class KafkaRecordSerializationSchemaBuilder. > Although on github I

Re: RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-02-01 Thread Yaroslav Tkachenko
ser wrote: > Hi Jiabao, > Thanks for the reply. > Currently I am using Flink 1.16.1 and I am not able to find any HeaderProvider setter method in class KafkaRecordSerializationSchemaBuilder. > Although on github I found this support here: > htt

RE: RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-02-01 Thread Jiabao Sun
m using Flink 1.16.1 and I am not able to find any > HeaderProvider setter method in class KafkaRecordSerializationSchemaBuilder. > Although on github I found this support here: > https://github.com/apache/flink-connector-kafka/blob/v3.1/flink-connector-kafka/src/main/java/org/apac

RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-02-01 Thread Kirti Dhar Upadhyay K via user
-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaRecordSerializationSchemaBuilder.java But this doesn't seem to be released yet. Can you please point me towards the correct Flink version? Also, any help on question 1 regarding the Schema Registry? Regards, Kirti Dhar -Original Message-

RE: Flink Kafka Sink + Schema Registry + Message Headers

2024-01-31 Thread Jiabao Sun
Hi Kirti, Kafka Sink supports sending messages with headers. You should implement a HeaderProvider to extract headers from the input element. KafkaSink sink = KafkaSink.builder() .setBootstrapServers(brokers) .setRecordSerializer(KafkaRecordSerializationSchema.builder
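A fuller sketch of that builder, assuming flink-connector-kafka 3.1+ (where, per this thread, KafkaRecordSerializationSchemaBuilder exposes setHeaderProvider); broker, topic, and the header itself are placeholders:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;

KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")              // placeholder
        .setRecordSerializer(KafkaRecordSerializationSchema.<String>builder()
                .setTopic("out-topic")                   // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                // the HeaderProvider maps each element to the headers it should carry
                .setHeaderProvider(element -> new RecordHeaders(new RecordHeader[] {
                        new RecordHeader("source", "flink".getBytes(StandardCharsets.UTF_8))}))
                .build())
        .build();
```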

Flink Kafka Sink + Schema Registry + Message Headers

2024-01-31 Thread Kirti Dhar Upadhyay K via user
Hi mates, I have the below queries regarding the Flink Kafka Sink. 1. Does Kafka Sink support a schema registry? If yes, is there any documentation on configuring it? 2. Does Kafka Sink support sending messages (ProducerRecord) with headers? Regards, Kirti Dhar

Re: Kafka Sink and Kafka transaction timeout

2023-10-02 Thread Tzu-Li (Gordon) Tai
?pageId=255071710 On Mon, Oct 2, 2023 at 2:47 PM Lorenzo Nicora wrote: > Hi team > > In Kafka Sink docs [1], with EXACTLY_ONCE it is recommended to set: > transaction_timeout > maximum_checkpoint_duration + > maximum_restart_duration. > > I understand transaction_timeout >

Kafka Sink and Kafka transaction timeout

2023-10-02 Thread Lorenzo Nicora
Hi team, in the Kafka Sink docs [1], with EXACTLY_ONCE it is recommended to set: transaction_timeout > maximum_checkpoint_duration + maximum_restart_duration. I understand transaction_timeout > maximum_checkpoint_duration, but why add maximum_restart_duration? If the application recovers
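One reading of the recommendation: the transaction that is committed on recovery was opened before the failure, so it must survive the restart time as well as the checkpoint interval, or the broker will abort it before it can be committed. A sketch of raising the producer-side timeout accordingly (the 15-minute value, prefix, broker, and topic are placeholders; the broker-side transaction.max.timeout.ms must be at least as large):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties producerProps = new Properties();
// keep this above max checkpoint duration + max restart duration, and below
// the broker's transaction.max.timeout.ms
producerProps.setProperty(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG,
        String.valueOf(15 * 60 * 1000)); // 15 minutes, placeholder

KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")              // placeholder
        .setKafkaProducerConfig(producerProps)
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("my-app")              // placeholder
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("out-topic")                   // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .build();
```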

Re: kafka sink

2023-07-30 Thread nick toker
-3b1f-4f6b-b9a0-6dacb4d5408b', > dataBytes=291}}]}, operatorStateFromStream=StateObjectCollection{[]}, > keyedStateFromBackend=StateObjectCollection{[]}, keyedStateFromStream= > StateObjectCollection{[]}, inputChannelState=StateObjectCollection{[]}, > resultSubpartitionState=StateObjectColle

Re: kafka sink

2023-07-24 Thread nick toker
Shammon FY <zjur...@gmail.com>: > Hi nick, > > Is there any error log? That may help to analyze the root cause. > > On Sun, Jul 23, 2023 at 9:53 PM nick toker > wrote: > >> hello >> >> we replaced the deprecated kafka producer with kaf

Re: kafka sink

2023-07-23 Thread Shammon FY
Hi nick, Is there any error log? That may help to analyze the root cause. On Sun, Jul 23, 2023 at 9:53 PM nick toker wrote: > hello > we replaced the deprecated kafka producer with the kafka sink > and from time to time when we submit a job it gets stuck for 5 min in > initializing (

kafka sink

2023-07-23 Thread nick toker
Hello, we replaced the deprecated kafka producer with the kafka sink, and from time to time when we submit a job it gets stuck for 5 min in initializing (on the sink operators). We verified that the transaction prefix is unique. This did not happen when we used the kafka producer. What can be the reason?

Re: Kafka Sink Kafka Producer metrics?

2023-02-07 Thread Andrew Otto
>>> Kafka Source will emit KafkaConsumer metrics >>> <https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-consumer-metrics> >>> . >>> >>> It looks like Kafka Sink >>> <https://nightlies.apache.org/f

Re: Kafka Sink Kafka Producer metrics?

2023-02-06 Thread Mason Chen
> On Mon, Feb 6, 2023 at 11:49 AM Andrew Otto wrote: > >> Hi! >> >> Kafka Source will emit KafkaConsumer metrics >> <https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-consumer-metrics> >> . >> >> It

Re: Kafka Sink Kafka Producer metrics?

2023-02-06 Thread Mason Chen
AM Andrew Otto wrote: > Hi! > > Kafka Source will emit KafkaConsumer metrics > <https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-consumer-metrics> > . > > It looks like Kafka Sink > <https://nightlies.apache.org/flink/

Kafka Sink Kafka Producer metrics?

2023-02-06 Thread Andrew Otto
Hi! Kafka Source will emit KafkaConsumer metrics <https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-consumer-metrics> . It looks like Kafka Sink <https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#monitorin

Extremely long startup time for exactly_once kafka sink

2023-01-31 Thread Bobby Richard
When enabling exactly_once on my kafka sink, I am seeing extremely long initialization times (over 1 hour), especially after restoring from a savepoint. In the logs I see the job constantly initializing thousands of kafka producers like this: 2023-01-31 14:39:58,150 INFO

Re: Strange issue with exactly once checkpoints and the kafka sink

2022-11-16 Thread Salva Alcántara
order for the kafka sink to automatically adapt setTransactionalIdPrefix("XYZ") // just in case transactions are required ```` make the kafka sink automatically adapt to the checkpointing.mode (that is, use the same guarantee), or, on the contrary, should I explicitly set
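A sketch of the explicit wiring the thread asks about, assuming the sink does not pick up the checkpointing mode by itself; the mapping below is one possible choice, not documented connector behavior:

```java
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Derive the sink guarantee from the checkpointing mode yourself.
DeliveryGuarantee guarantee =
        env.getCheckpointConfig().getCheckpointingMode() == CheckpointingMode.EXACTLY_ONCE
                ? DeliveryGuarantee.EXACTLY_ONCE  // also requires setTransactionalIdPrefix(...)
                : DeliveryGuarantee.AT_LEAST_ONCE;
```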

Strange issue with exactly once checkpoints and the kafka sink

2022-11-06 Thread Salva Alcántara
[1]: https://github.com/apache/flink/blob/f494be6956e850d4d1c9fd50b79e5a8dd5b53e47/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java#L66 [2]: https://github.com/apache/flink/blob/f494be6956e850d4d1c9fd50b79e5a8dd5b53e47/flink-connec

Re: Is kafka key supported in kafka sink table

2022-05-19 Thread Shengkai Fang
g <24248...@163.com> wrote: >> Hi dear engineer, >> Flink sql supports kafka sink table, not sure whether it supports kafka key in kafka sink table? As I want to specify kafka key when inserting data into kafka sink table. >> Thanks for your answer in advance. >> Thanks && Regards, >> Hunk

Re: Kafka Sink Key and Value Avro Schema Usage Issues

2022-05-19 Thread Peter Schrott
// TODO Auto-generated catch block > } catch (Exception e) { > } > return keyOut.toByteArray(); > } *From:* Ghiya, Jay (GE Healthcare) *Sent:* 18 May 2022 21:51

RE: Kafka Sink Key and Value Avro Schema Usage Issues

2022-05-18 Thread Ghiya, Jay (GE Healthcare)
2022 21:51 To: user@flink.apache.org Cc: d...@flink.apache.org; Pandiaraj, Satheesh kumar (GE Healthcare) ; Kumar, Vipin (GE Healthcare) Subject: Kafka Sink Key and Value Avro Schema Usage Issues Hi Team, this is regarding the Flink Kafka Sink. We have a use case where we have headers and both key

Kafka Sink Key and Value Avro Schema Usage Issues

2022-05-18 Thread Ghiya, Jay (GE Healthcare)
Hi Team, this is regarding the Flink Kafka Sink. We have a use case where we have headers and both key and value as Avro schemas. Below is the expectation in terms of intuitiveness for the Avro kafka key and value: KafkaSink.<...>builder() .setBootstrapServers(cloudkafkaBroker
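A hedged sketch of one way to get independent Avro key and value serialization: implement KafkaRecordSerializationSchema directly and assemble the ProducerRecord yourself. Event, MyKey, MyValue, the subject names, and the registry URL are all hypothetical placeholders:

```java
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

KafkaRecordSerializationSchema<Event> schema = new KafkaRecordSerializationSchema<Event>() {

    private final SerializationSchema<MyKey> keySchema =
            ConfluentRegistryAvroSerializationSchema.forSpecific(
                    MyKey.class, "out-topic-key", "http://registry:8081");
    private final SerializationSchema<MyValue> valueSchema =
            ConfluentRegistryAvroSerializationSchema.forSpecific(
                    MyValue.class, "out-topic-value", "http://registry:8081");

    @Override
    public void open(SerializationSchema.InitializationContext context,
                     KafkaSinkContext sinkContext) throws Exception {
        // initialize the nested schemas before first use
        keySchema.open(context);
        valueSchema.open(context);
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(
            Event element, KafkaSinkContext context, Long timestamp) {
        // serialize key and value independently, then assemble the record
        return new ProducerRecord<>("out-topic",
                keySchema.serialize(element.key), valueSchema.serialize(element.value));
    }
};
```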

Re: Is kafka key supported in kafka sink table

2022-05-17 Thread Dhavan Vaidya
"top" level columns of your sink table (i.e., fields inside Row are not supported, at least in PyFlink). Thanks! On Mon, 16 May 2022 at 19:33, wang <24248...@163.com> wrote: > Hi dear engineer, > > Flink sql supports kafka sink table, not sure whether it supports kafka &g

Is kafka key supported in kafka sink table

2022-05-16 Thread wang
Hi dear engineer, Flink SQL supports a kafka sink table; I am not sure whether it supports a kafka key in the kafka sink table? I want to specify the kafka key when inserting data into the kafka sink table. Thanks for your answer in advance. Thanks && Regards, Hunk

Re: Kafka Sink Error in sending Row Type data in Flink-(version 1.14.4)

2022-04-27 Thread Dian Fu
Hi Harshit, I have already replied to you in an earlier thread [1] for the same question; it seems you may have missed it. Please double-check whether that reply is helpful for you. Regards, Dian [1] https://lists.apache.org/thread/cm6r569spq67249dxw57q8lxh0mk3f7y On Wed, Apr 27, 2022 at 6

Kafka Sink Error in sending Row Type data in Flink-(version 1.14.4)

2022-04-27 Thread harshit.varsh...@iktara.ai
Dear Team, I am new to PyFlink and request your support with an issue I am facing. I am using PyFlink version 1.14.4 and reference code from the PyFlink GitHub. I am getting the following error: grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that termin

Re: KAFKA SINK ERROR IN Flink-(version 1.14.4)

2022-04-23 Thread Dian Fu
Hi Harshit, could you try to update the following line `ds = ds.map(lambda x: ','.join([str(value) for value in x]))` to the following: `ds = ds.map(lambda x: ','.join([str(value) for value in x]), output_type=Types.STRING())`? The reason is that if the output type is not specified, it will be seriali

KAFKA SINK ERROR IN Flink-(version 1.14.4)

2022-04-22 Thread harshit.varsh...@iktara.ai
Dear Team, I am new to PyFlink and request your support with an issue I am facing. I am using PyFlink version 1.14.4 and reference code from the PyFlink getting-started pages. I am getting the following error: py4j.protocol.Py4JJavaError: An error occurred while calling o10.exec

Re: how to set kafka sink ssl properties

2022-03-22 Thread HG
Hello Matthias and others, I am trying to configure a Kafka Sink with SSL properties as shown further below, but in the logs I see warnings: 2022-03-21 12:30:17,108 WARN org.apache.kafka.clients.admin.AdminClientConfig [] - The configuration 'group.id' was supplied but isn

Re: how to set kafka sink ssl properties

2022-03-18 Thread Qingsheng Ren
working? Is the > ssl.truststore.location accessible from the Flink nodes? > > Matthias > > On Thu, Mar 17, 2022 at 4:00 PM HG wrote: > Hi all, > I am probably not the smartest but I cannot find how to set SSL properties > for a Kafka Sink. > My assump

Re: how to set kafka sink ssl properties

2022-03-17 Thread HG
On Thu, Mar 17, 2022 at 4:00 PM HG wrote: >> Hi all, >> I am probably not the smartest but I cannot find how to set >> SSL properties for a Kafka Sink. >> My assumption was that it would be just like the Kafka Consumer >> >> KafkaSource source = KafkaSource.builder

Re: how to set kafka sink ssl properties

2022-03-17 Thread Matthias Pohl
Could you share more details on what's not working? Is the ssl.truststore.location accessible from the Flink nodes? Matthias On Thu, Mar 17, 2022 at 4:00 PM HG wrote: > Hi all, > I am probably not the smartest but I cannot find how to set SSL properties > for a Kafka Sink. >

how to set kafka sink ssl properties

2022-03-17 Thread HG
Hi all, I am probably not the smartest, but I cannot find how to set SSL properties for a Kafka Sink. My assumption was that it would be just like the Kafka consumer: KafkaSource source = KafkaSource.builder() .setProperties(kafkaProps) .setProperty("ssl.truststore
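The sink builder does take properties the same way, via setProperty()/setKafkaProducerConfig(). A sketch with placeholder broker, paths, and passwords:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

Properties sslProps = new Properties();
sslProps.setProperty("security.protocol", "SSL");
sslProps.setProperty("ssl.truststore.location", "/etc/kafka/truststore.jks"); // placeholder
sslProps.setProperty("ssl.truststore.password", "changeit");                  // placeholder

KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9093")              // placeholder
        .setKafkaProducerConfig(sslProps)
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("out-topic")                   // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .build();
```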

Re: Query regarding Kafka Source and Kafka Sink in 1.14.3

2022-01-24 Thread Caizhi Weng
Hi! All properties you set by calling KafkaSource.builder().setProperty() will also be given to the KafkaConsumer (see [1]). However, these two properties are specific to Flink and Kafka does not know them, so Kafka will produce a warning message. These messages are harmless as long as the properties y

Query regarding Kafka Source and Kafka Sink in 1.14.3

2022-01-23 Thread Mahima Agarwal
Hi Team, I am trying to set the following properties in the Kafka Source API in Flink version 1.14.3: -> client.id.prefix -> partition.discovery.interval.ms But I am getting the following warning in the taskmanager logs: 1. WARN org.apache.kafka.clients.consumer.ConsumerConfig [] - Th

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-14 Thread Nick Bendtner
te: > Hi Gary, > Thanks for the info. I am aware this feature is available from 1.9.0 onwards. Our cluster is still very old and has CI/CD challenges; I was hoping not to bloat up the application jar by packaging even flink-core with it.

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-14 Thread Gary Yao
e in 1.9.0 onwards. Our cluster is still very old and has CI/CD challenges; I was hoping not to bloat up the application jar by packaging even flink-core with it. If it's not possible to do this with an older version without writing our own kafka sink implementation similar to the Flink-provided version in

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-12 Thread Nick Bendtner
m aware this feature is available in 1.9.0 > onwards. Our cluster is still very old and has CI/CD challenges; I was > hoping not to bloat up the application jar by packaging even flink-core > with it. If it's not possible to do this with an older version without writing > our own kafka sink i

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-12 Thread Gary Yao
possible to do this with an older version without writing our own kafka sink > implementation similar to the Flink-provided version in 1.9.0, then I think we > will pack flink-core 1.9.0 with the application and follow the approach that > you suggested. Thanks again for getting back

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-12 Thread Nick Bendtner
own kafka sink implementation similar to the Flink-provided version in 1.9.0, then I think we will pack flink-core 1.9.0 with the application and follow the approach that you suggested. Thanks again for getting back to me so quickly. Best, Nick On Tue, May 12, 2020 at 3:37 AM Gary Yao wrote: >

Re: ProducerRecord with Kafka Sink for 1.8.0

2020-05-12 Thread Gary Yao
/flink/streaming/connectors/kafka/KafkaSerializationSchema.html On Mon, May 11, 2020 at 10:59 PM Nick Bendtner wrote: > > Hi guys, > I use 1.8.0 version for flink-connector-kafka. Do you have any > recommendations on how to produce a ProducerRecord from a kafka sink. Looking > to
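A sketch of that KafkaSerializationSchema route (Flink 1.9+), which also answers the headers question since you assemble the whole ProducerRecord; topic and header values are placeholders:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

// You build the full ProducerRecord, so key, partition, and headers are all in
// your hands.
KafkaSerializationSchema<String> schema = (element, timestamp) -> {
    ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(
            "out-topic", element.getBytes(StandardCharsets.UTF_8));
    record.headers().add("origin", "flink".getBytes(StandardCharsets.UTF_8));
    return record;
};
```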

ProducerRecord with Kafka Sink for 1.8.0

2020-05-11 Thread Nick Bendtner
Hi guys, I use version 1.8.0 of flink-connector-kafka. Do you have any recommendations on how to produce a ProducerRecord from a kafka sink? I am looking to add support for kafka headers, therefore thinking about ProducerRecord. If you have any thoughts they are highly appreciated. Best, Nick.

Re: Re: Does flink-1.10 sql-client support kafka sink?

2020-03-11 Thread wangl...@geekplus.com.cn
flink-sql-connector-kafka_2.11-1.9.1.jar in a flink-1.10 environment. Thanks, Lei wangl...@geekplus.com.cn From: wangl...@geekplus.com.cn Date: 2020-03-11 14:51 To: Jark Wu CC: Arvid Heise; user Subject: Re: Re: Does flink-1.10 sql-client support kafka sink? Hi Jark, I have tried to use CREATE

Re: Re: Does flink-1.10 sql-client support kafka sink?

2020-03-11 Thread Jark Wu
:* Jark Wu > *Date:* 2020-03-11 11:13 > *To:* wangl...@geekplus.com.cn > *CC:* Arvid Heise ; user > *Subject:* Re: Re: Does flink-1.10 sql-client support kafka sink? > Hi Lei, > > CREATE TABLE DDL [1][2] is the recommended way to register a table since > 1.9. And the ya

Re: Re: Does flink-1.10 sql-client support kafka sink?

2020-03-10 Thread wangl...@geekplus.com.cn
can select the result correctly. Thanks, Lei From: Jark Wu Date: 2020-03-11 11:13 To: wangl...@geekplus.com.cn CC: Arvid Heise; user Subject: Re: Re: Does flink-1.10 sql-client support kafka sink? Hi Lei, CREATE TABLE DDL [1][2] is the recommended way to register a table since 1.9. And the ya

Re: Re: Kafka sink only support append mode?

2020-03-10 Thread Jark Wu
Wu > *Date:* 2020-03-09 19:25 > *To:* wangl...@geekplus.com.cn > *CC:* user > *Subject:* Re: Kafka sink only support append mode? > Hi Lei, > > Yes. Currently, the Kafka sink only supports append mode. Other update modes > (e.g. upsert mode / retract mode) are on the agenda. > For now, you

Re: Re: Does flink-1.10 sql-client support kafka sink?

2020-03-10 Thread Jark Wu
> schema: "ROW(out_order_code STRING, input_date BIGINT, owner_code STRING, status INT)" > > under the format label. > > *From:* Arvid Heise > *Date:* 2020-03-10 20:51 > *To:* wangl...@geekplus.com.cn > *CC:* user > *Subject:* Re: Does flink-1.10 sql-client s

Re: Re: Does flink-1.10 sql-client support kafka sink?

2020-03-10 Thread wangl...@geekplus.com.cn
nt support kafka sink? Hi Lei, yes, Kafka as a sink is supported, albeit only for appends (no deletions/updates yet) [1]. An example is a bit hidden in the documentation [2]: tables: - name: MyTableSink type: sink-table update-mode: append connector: property-version: 1 t

Re: Does flink-1.10 sql-client support kafka sink?

2020-03-10 Thread Arvid Heise
Hi Lei, yes, Kafka as a sink is supported, albeit only for appends (no deletions/updates yet) [1]. An example is a bit hidden in the documentation [2]:
tables:
  - name: MyTableSink
    type: sink-table
    update-mode: append
    connector:
      property-version: 1
      type: kafka
      versio

Re: Re: Kafka sink only support append mode?

2020-03-10 Thread wangl...@geekplus.com.cn
appendable. It confused me. Thanks, Lei From: Jark Wu Date: 2020-03-09 19:25 To: wangl...@geekplus.com.cn CC: user Subject: Re: Kafka sink only support append mode? Hi Lei, Yes. Currently, the Kafka sink only supports append mode. Other update modes (e.g. upsert mode / retract mode) are on the agenda. For

Does flink-1.10 sql-client support kafka sink?

2020-03-10 Thread wangl...@geekplus.com.cn
I have configured the source table successfully using the following configuration:
- name: out_order
  type: source
  update-mode: append
  schema:
    - name: out_order_code
      type: STRING
    - name: input_date
      type: BIGINT
    - name: owner_code
      type: STRING
  connector:

Re: Kafka sink only support append mode?

2020-03-09 Thread Jark Wu
Hi Lei, yes. Currently, the Kafka sink only supports append mode. Other update modes (e.g. upsert mode / retract mode) are on the agenda. For now, you can customize a KafkaTableSink by implementing the UpsertStreamTableSink interface, where you will get Tuple2 records, and the Boolean represents insert

Kafka sink only support append mode?

2020-03-09 Thread wangl...@geekplus.com.cn
I wrote a simple program reading from kafka using SQL and sinking to kafka. But only 'update-mode' = 'append' is supported for the sink table, and the query SQL must have no GROUP BY statement. Is only append mode supported for the kafka sink? Thanks, Lei

Re: Partitioning based on key flink kafka sink

2019-11-06 Thread vino yang
Hi Vishwas, you should pay attention to the other args. The constructor you provided has a `KeyedSerializationSchema` arg, while the constructor whose comments confused you only has a `SerializationSchema` arg. That's their difference. Best, Vino Vishwas Siravara wrote on Wed, Nov 6, 2019 at

Partitioning based on key flink kafka sink

2019-11-05 Thread Vishwas Siravara
Hi all, I am using Flink 1.7.0 and using this constructor: FlinkKafkaProducer(String topicId, KeyedSerializationSchema serializationSchema, Properties producerConfig) From the doc it says this constructor uses a fixed partitioner. I want to partition based on the key, so I tried to use this public Fl
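A sketch of the key-partitioning variant: the overload that additionally takes an Optional<FlinkKafkaPartitioner> falls back to Kafka's own key-hash partitioning when the optional is empty. MyEvent and MyKeyedSchema are hypothetical placeholders:

```java
import java.util.Optional;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

Properties producerConfig = new Properties();
producerConfig.setProperty("bootstrap.servers", "broker:9092"); // placeholder

// With an empty custom-partitioner Optional, records are partitioned by Kafka
// itself, i.e. by a hash of the key returned from serializeKey().
FlinkKafkaProducer<MyEvent> producer = new FlinkKafkaProducer<>(
        "out-topic",                  // placeholder
        new MyKeyedSchema(),          // hypothetical KeyedSerializationSchema
        producerConfig,
        Optional.<FlinkKafkaPartitioner<MyEvent>>empty());
```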

Re: about Kafka sink and 2PC function

2019-10-18 Thread Andrey Zagrebin
Hi, This is the contract of 2PC transactions. Multiple commit retries should result in only one commit which actually happens in the external system. The external system has to support deduplication of committed transactions, e.g. by some unique id. Best, Andrey > On 10 Oct 2019, at 07:15, 121

about Kafka sink and 2PC function

2019-10-09 Thread 121476...@qq.com
After reading about FlinkKafkaProducer011 and the 2PC function in Flink, I understand that: in snapshotState(), the current transaction is pre-committed and added to the state; when the checkpoint is done and notifyCheckpointComplete() fires, the producer commits the current transaction to the brokers; in initializeState(), it restores from the state. c

Re: Throttling/effective back-pressure on a Kafka sink

2019-05-30 Thread Derek VerLee
Hi We’ve got a job producing to a Kafka sink. The Kafka topics have a retention of 2 weeks. When doing a complete replay, it seems like Flink isn’t able to back-pressure or throttle the amount o

Re: how to count kafka sink number

2019-05-13 Thread Konstantin Knauf
Hi Chong, to my knowledge, neither Flink's built-in metrics nor the metrics of the Kafka producer itself give you this number directly. If your sink is chained (no serialization, no network) to another Flink operator, you could take the numRecordsOut of this operator instead. It will tell you how

how to count kafka sink number

2019-05-12 Thread jszhouch...@163.com
Hi, I have a Flink SQL job reading records from kafka, then using a table function to do some transformation, then producing to kafka. I have found that in the Flink web UI the records received of the first subtask is always 0, and the records sent of the last subtask is 0 as well. I want to count how many r

Re: Throttling/effective back-pressure on a Kafka sink

2019-03-28 Thread Konstantin Knauf
frame. Feel free to send these logs to me directly, if you don't want to share them on the list. Best, Konstantin On Thu, Mar 28, 2019 at 2:04 PM Marc Rooding wrote: > Hi > > We’ve got a job producing to a Kafka sink. The Kafka topics have a > retention of 2 weeks. When doing

Throttling/effective back-pressure on a Kafka sink

2019-03-28 Thread Marc Rooding
Hi We’ve got a job producing to a Kafka sink. The Kafka topics have a retention of 2 weeks. When doing a complete replay, it seems like Flink isn’t able to back-pressure or throttle the amount of messages going to Kafka, causing the following error

Re: Problem when use kafka sink with EXACTLY_ONCE in IDEA

2019-01-02 Thread Till Rohrmann
fka/clients/producer/KafkaProducer.html Cheers, Till On Wed, Jan 2, 2019 at 5:02 AM Kaibo Zhou wrote: > Hi, > I encountered an error while running the kafka sink demo in IDEA. > > This is the complete code: > > import java.util.

Problem when use kafka sink with EXACTLY_ONCE in IDEA

2019-01-01 Thread Kaibo Zhou
Hi, I encountered an error while running the kafka sink demo in IDEA. This is the complete code: import java.util.Properties import org.apache.flink.api.common.serialization.SimpleStringSchema import org.apache.flink.runtime.state.filesystem.FsStateBackend import

Re: Duplicated messages in kafka sink topic using flink cancel-with-savepoint operation

2018-12-03 Thread Piotr Nowojski
Nastaran Motavali > Cc: user@flink.apache.org > Subject: Re: Duplicated messages in kafka sink topic using flink > cancel-with-savepoint operation > > Hi Nastaran, > > When you are checking for duplicated messages, are you reading from kafka > using `read_committed

Re: Duplicated messages in kafka sink topic using flink cancel-with-savepoint operation

2018-12-01 Thread Nastaran Motavali
his sink, the duplicated messages have not been read, so everything is OK. Kind regards, Nastaran Motavalli From: Piotr Nowojski Sent: Thursday, November 29, 2018 3:38:38 PM To: Nastaran Motavali Cc: user@flink.apache.org Subject: Re: Duplicated messages in kaf

Re: Duplicated messages in kafka sink topic using flink cancel-with-savepoint operation

2018-11-29 Thread Piotr Nowojski
Hi Nastaran, when you are checking for duplicated messages, are you reading from kafka using `read_committed` mode (this is not the default value)? https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#kafka-producer-partitioning-scheme > Semantic.EXACTLY_ONCE: uses Ka
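A sketch of such a verification consumer's config; read_committed hides records from transactions that were aborted or never committed:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties consumerProps = new Properties();
// the default is read_uncommitted, which also returns records from transactions
// that were later aborted (e.g. after cancel-with-savepoint)
consumerProps.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
```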

Duplicated messages in kafka sink topic using flink cancel-with-savepoint operation

2018-11-28 Thread Nastaran Motavali
Hi, I have a flink streaming job implemented in Java which reads some messages from a kafka topic, transforms them and finally sends them to another kafka topic. The flink version is 1.6.2 and the kafka version is 0.11. I pass the Semantic.EXACTLY_ONCE parameter to the producer. The problem i

How to handle exceptions in Kafka sink?

2018-09-13 Thread HarshithBolar
I have a Flink job that writes data into Kafka. The Kafka topic has maximum message size set to 5 MB, so if I try to write any record larger than 5 MB, it throws the following exception and brings the job down.

Re: Want to write Kafka Sink to SQL Client by Flink-1.5

2018-07-11 Thread Shivam Sharma
Awesome!!! It would be helpful if you could share that PR. Thanks

Re: Want to write Kafka Sink to SQL Client by Flink-1.5

2018-07-11 Thread Timo Walther
Hi Shivam, a Kafka sink for the SQL Client will be part of Flink 1.6. For this we need to provide basic interfaces that sinks can extend, as Rong mentioned (FLINK-8866). In order to support all formats that sources also support, we are also working on separating the connector from the formats

Re: Want to write Kafka Sink to SQL Client by Flink-1.5

2018-07-11 Thread Rong Rong
Hi Shivam, thank you for your interest in contributing a Kafka Sink for the SQL client. Could you share your implementation plan? I have some questions, as there might be some overlap with current implementations. On a higher level: 1. Are you using some type of metadata store to host topic

Want to write Kafka Sink to SQL Client by Flink-1.5

2018-07-10 Thread Shivam Sharma
Hi All, we want to write Kafka Sink functionality for the Flink (1.5) SQL Client. We have read the code and chalked out a rough plan for implementation. Any guidance for this implementation will be very helpful. Thanks -- Shivam Sharma Data Engineer @ Goibibo Indian Institute Of Information

Re: Table/SQL Kafka Sink Question

2018-03-27 Thread Timo Walther
specify a *message key* using the Kafka sink in the Table/SQL API? The goal is to sink each row as JSON alongside a message key into Kafka. I was achieving it using the Stream API by specifying a *KeyedSerializationSchema* using the *serializeKey()* method. Thanks in advance!

Re: Table/SQL Kafka Sink Question

2018-03-27 Thread Alexandru Gutan
5, Chesnay Schepler wrote: > Hello, > > as far as I can tell this is not possible. I'm including Timo, maybe he can > explain why this isn't supported. > > On 26.03.2018 21:56, Pavel Ciorba wrote: > Hi everyone! > Can I specify a *message key* using the Ka

Re: Table/SQL Kafka Sink Question

2018-03-27 Thread Chesnay Schepler
Hello, as far as I can tell this is not possible. I'm including Timo, maybe he can explain why this isn't supported. On 26.03.2018 21:56, Pavel Ciorba wrote: Hi everyone! Can I specify a *message key* using the Kafka sink in the Table/SQL API? The goal is to sink each row as JSON

Table/SQL Kafka Sink Question

2018-03-26 Thread Pavel Ciorba
Hi everyone! Can I specify a *message key* using the Kafka sink in the Table/SQL API? The goal is to sink each row as JSON alongside a message key into Kafka. I was achieving it using the Stream API by specifying a *KeyedSerializationSchema* using the *serializeKey()* method. Thanks in

Re: Spread Kafka sink tasks over different nodes

2018-03-06 Thread Dongwon Kim
Hi Aljoscha and Robert, you are right. I resubmitted the application with the number of session window tasks equal to the number of Kafka sink tasks. I never thought that multiple different Kafka tasks could write to the same partition. Initially, I did not set the default parallelism and I explicitly set

Re: Spread Kafka sink tasks over different nodes

2018-03-06 Thread Aljoscha Krettek
three types of tasks:
> - Kafka source tasks (parallelism: 7, as the number of partitions in the input topic is 7)
> - Session window tasks (parallelism: 224)
> - Kafka sink tasks (parallelism: 7, as the number of partitions in the output topic is 7)
> We want 7 sour

All accumulators/counters which are set on Kafka Source or Kafka Sink are missing after upgrading to Flink Kafka 0.10?

2017-04-17 Thread sohimankotia
before upgrading to 0.10.