… trying again to connect to the broker after a disruption?

3. In case of the source failing, is there a way in the Flink program
using the KafkaSource to detect the error and add some error handling
mechanism, e.g. sending an alert mail to the stakeholders in case the
source fails completely? (Something similar to
"ActionRequestFailureHandler" for the ElasticsearchSink.)

Many thanks
Hello,
We are using "FlinkKafkaConsumer011" as a Kafka source consumer for
Flink. Please guide us on how to implement an error handling mechanism
for the following:
1. If the subscription to the Kafka topic gets lost or the Kafka
connection gets disconnected, is there any mechanism for trying again
to connect to the broker after a disruption?
Hi,
I need help with handling errors from the Elasticsearch sink, as below:

2019-11-19 08:09:09,043 ERROR
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase -
Failed Elasticsearch item request:
[flink-index-deduplicated/nHWQM0XMSTatRri7zw_s_Q][[flink-index-deduplicated][1…
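Not from the message above, but for reference: the Elasticsearch connector exposes ActionRequestFailureHandler for exactly this situation. A minimal sketch, assuming the older connector API in which the handler still receives an ActionRequest; the retry/drop policy here is an illustrative choice, not the thread's answer.

```java
import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.util.ExceptionUtils;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;

public class ExampleFailureHandler implements ActionRequestFailureHandler {

    @Override
    public void onFailure(ActionRequest action, Throwable failure,
                          int restStatusCode, RequestIndexer indexer)
            throws Throwable {
        if (ExceptionUtils.findThrowable(
                failure, EsRejectedExecutionException.class).isPresent()) {
            // Full bulk queue: re-queue the request so it is retried.
            indexer.add(action);
        } else if (restStatusCode == 400) {
            // Malformed request: retrying cannot succeed, so drop it.
            // (Log it or route it to a dead-letter store in a real job.)
        } else {
            // Unknown failure: rethrow and let the job fail over.
            throw failure;
        }
    }
}
```

The handler would be registered via ElasticsearchSink.Builder#setFailureHandler(...); the connector also ships a ready-made RetryRejectedExecutionFailureHandler for the queue-rejection case.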
---
Hi,
Do you have any progress with that?

-----Original Message-----
From: Steven Nelson
Sent: Tuesday, June 18, 2019 7:02 PM
To: user@flink.apache.org
Subject: Flink error handling

Hello!
We are internally having a debate on how best to handle exceptions within
our operators. Some advocate …
Hello!
We are internally having a debate on how best to handle exceptions within
our operators. Some advocate for wrapping maps/flatMaps inside a
ProcessFunction and sending the error to a side output. Other options are
returning a custom Either that gets filtered and mapped into different si…
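For reference, a minimal sketch of the first option: the risky map logic runs inside a ProcessFunction, and failures go to an OutputTag instead of failing the job. The parse logic and names here are illustrative assumptions, not code from the thread.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SafeMap {
    // Anonymous subclass so the OutputTag keeps its type information.
    static final OutputTag<String> ERRORS = new OutputTag<String>("errors") {};

    public static SingleOutputStreamOperator<Integer> parseSafely(
            DataStream<String> input) {
        return input.process(new ProcessFunction<String, Integer>() {
            @Override
            public void processElement(String value, Context ctx,
                                       Collector<Integer> out) {
                try {
                    // The wrapped "map" logic; parsing is just an example.
                    out.collect(Integer.parseInt(value));
                } catch (Exception e) {
                    // Bad record: emit to the error side output, keep running.
                    ctx.output(ERRORS, value + " -> " + e.getMessage());
                }
            }
        });
    }
}
// Usage: SingleOutputStreamOperator<Integer> parsed = SafeMap.parseSafely(lines);
//        DataStream<String> errors = parsed.getSideOutput(SafeMap.ERRORS);
```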
Hi there,
I am trying to parse multiple CSV files in a directory using
CsvTableSource and insert each row into Cassandra using CassandraSink.
How does Flink handle parse errors in some of the CSV files within that
directory?
--
Thanks & Regards,
Athar
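Not an answer from the thread, but worth noting: by default a row that fails to parse fails the job, and CsvTableSource's builder exposes ignoreParseErrors() to skip such rows instead. A minimal sketch; the path and schema are illustrative assumptions.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sources.CsvTableSource;

public class CsvSourceExample {
    public static CsvTableSource buildLenientSource() {
        return CsvTableSource.builder()
                .path("/data/csv-dir")       // directory of CSV files (illustrative)
                .field("id", Types.INT)      // illustrative schema
                .field("name", Types.STRING)
                .fieldDelimiter(",")
                .ignoreFirstLine()           // skip header rows
                .ignoreParseErrors()         // skip rows that fail to parse
                .build();
    }
}
```

Skipped rows are dropped silently, so if the bad records themselves are needed, reading the files as plain strings and parsing them in a ProcessFunction with an error side output (as sketched above) gives more control.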
I'm not aware of any changes made in this direction.
On 08.05.2018 23:30, Vishnu Viswanath wrote:
> Was referring to the original email thread:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-handling-td3448.html
Was referring to the original email thread:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-handling-td3448.html

On Tue, May 8, 2018 at 5:29 PM, vishnuviswanath <
vishnu.viswanat...@gmail.com> wrote:
> Hi,
> Wondering if any of these ideas were implemented after the discussion?
Hi,
Wondering if any of these ideas were implemented after the discussion?
Thanks,
Vishnu
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
… at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.trigger(WindowOperator.java:508)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$TriggerTask.run(StreamTask.java:796)
    ... 7 more
Caused by: java.io.IOException: search for error
    at mypackage.flink.pgsql.batchv2.BatchPostgreSqlSinkV4.invoke(BatchPostgreSqlSinkV4.java:48)
    at mypackage.flink.pgsql.batchv2.BatchPostgreSqlSinkV4.invoke(BatchPostgreSqlSinkV4.java:23)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:39)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:373)
    ... 18 more
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-handling-tp3448p10168.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at
Nabble.com.
Caused by: java.lang.RuntimeException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:376)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:358)
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-handling-tp3448p1014…
Hi Nick,
regarding the Kafka example: what happens is that the FlinkKafkaConsumer
will throw an exception. The JobManager then cancels the entire job and
restarts it. It will then try to continue reading from the last valid
checkpoint or from the consumer offset in ZooKeeper.
Since the data in the topic…
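A minimal sketch of the recovery setup described above: with checkpointing enabled, a restarted job resumes from the checkpointed offsets, and setStartFromGroupOffsets() only applies when there is no checkpoint to restore from. Broker, topic, and group names are illustrative assumptions.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class KafkaRecoveryExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(30_000); // offsets travel with each checkpoint

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // illustrative
        props.setProperty("group.id", "recovery-demo");        // illustrative

        FlinkKafkaConsumer011<String> consumer = new FlinkKafkaConsumer011<>(
                "events", new SimpleStringSchema(), props);
        // Used only when the job starts fresh with no checkpoint to restore
        // from; a restore always takes precedence over this setting.
        consumer.setStartFromGroupOffsets();

        env.addSource(consumer).print();
        env.execute("kafka-recovery-example");
    }
}
```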
>
> The errors outside your UDF (such as network problems) will be handled by
> Flink and cause the job to go into recovery. They should be transparently
> handled.
Is that so? I've been able to feed bad data onto my Kafka topic and cause
the stream job to abort. You're saying this should not be…
Makes sense. The class of operations that work "per-tuple" before the data
is forwarded to the network stack could be extended to have error traps.
@Nick: Is that what you had in mind?
On Mon, Nov 16, 2015 at 7:27 PM, Aljoscha Krettek wrote:
> Hi,
> I don’t think that alleviates the problem. …
Hi,
I don’t think that alleviates the problem. Sometimes you might want the
system to continue even if stuff outside the UDF fails, for example if a
serializer does not work because of a null value somewhere. You would,
however, like to get a message about this somewhere, I assume.
Cheers,
Aljoscha
Hi Nick!
The errors outside your UDF (such as network problems) will be handled by
Flink and cause the job to go into recovery. They should be transparently
handled.
Just make sure you activate checkpointing for your job!
Stephan
On Mon, Nov 16, 2015 at 6:18 PM, Nick Dimiduk wrote:
> I have …
Hi Nick,
these are some interesting ideas. I have been thinking about this: maybe
we can add a special output stream (for example Kafka, but it can be
generic) that would get the errors/exceptions that were thrown during
processing. The actual processing would not stop, and the messages in
this special stream would contain some information…
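A minimal sketch of that idea, combined with the side-output pattern sketched earlier in this thread: errors captured per record are shipped to a dedicated Kafka topic, so processing continues while failures stay observable. Topic and broker names are illustrative assumptions.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;

public class ErrorStreamSink {
    // Attach a "dead letter" Kafka sink to an error side-output stream.
    public static void attach(DataStream<String> errors) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // illustrative

        errors.addSink(new FlinkKafkaProducer011<>(
                "processing-errors",        // dedicated error topic (illustrative)
                new SimpleStringSchema(),
                props));
    }
}
```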
Heya,
I don't see a section in the online manual dedicated to this topic, so I
want to raise the question here: How should errors be handled? Specifically
I'm thinking about streaming jobs, which are expected to "never go down".
For example, errors can be raised at the point where objects are serialized…