Aakash Shah created KAFKA-10153:
---
Summary: Documentation for the Errant Record Reporter
Key: KAFKA-10153
URL: https://issues.apache.org/jira/browse/KAFKA-10153
Project: Kafka
Issue Type: Task
Aakash Shah created KAFKA-10115:
---
Summary: Incorporate errors.tolerance with the Errant Record
Reporter
Key: KAFKA-10115
URL: https://issues.apache.org/jira/browse/KAFKA-10115
Project: Kafka
at 2:31 PM Aakash Shah wrote:
> Hi Chris and others,
>
> Yes, you are correct; I looked through KIP-298 to understand it better. I
> agree with your idea to handle "errors.tolerance=none."
>
> I see, you are basically saying you are in favor of standardizing handling
> during conversion or transformation. We could
> leave the choice in the hands of developers but this might make things
> confusing for users who get different behavior from different connectors
> under the same circumstances.
>
> Hope this helps!
>
> Cheers,
>
> Chris
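For reference, the "errors.tolerance" behavior discussed above comes from the KIP-298 error-handling properties. A sketch of the relevant worker/connector configs (only `errors.tolerance` is quoted in the thread; the dead letter queue names follow KIP-298):

```properties
# Fail the task on the first conversion/transformation error (the default):
errors.tolerance=none

# Or skip problematic records and keep the task running:
# errors.tolerance=all

# With tolerance=all, optionally route failed records to a dead letter queue
# and log the error context:
# errors.deadletterqueue.topic.name=my-connector-dlq
# errors.log.enable=true
```

With `errors.tolerance=none` there is nothing for an errant record reporter to do; the task fails, which is the standardized behavior being agreed on here.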
Hi Arjun,
I am not very familiar with how the potential heartbeat failure would cause
more failures when consuming subsequent records. Can you elaborate on this?
Thanks,
Aakash
On Tue, May 19, 2020 at 10:03 AM Arjun Satish
wrote:
> One more concern with the connector blocking on the Future's g
Hi Chris,
Thanks for the suggestions.
If "errors.tolerance=none", should it not be the case that the error
reporter does not report any error at all; rather, the task just fails after
throwing the error? I do understand the point you are making about
duplicates, though.
You raise a good point abou
ny task that needs stricter guarantees can use
>the future to block on the reporter, including at some later point in
> time
>after the `report(...)` method is called; and
>- is not onerous because using the future is a common pattern and simple
>blocking, if needed, is tr
h
> reporting).
>
> Cheers,
>
> Chris
>
> On Mon, May 18, 2020 at 1:17 PM Aakash Shah wrote:
>
> > Hi all,
> >
> > Chris, I see your points about whether Futures provide much benefit at
> > all, as they are not truly asynchronous.
erything right the first time and provide maximum flexibility doesn't
> seem as pressing, and the goal of minimizing the kind of API that we have
> to support for future versions without making unnecessary additions is
> easier to achieve.
>
> Cheers,
>
> Chris
>
>
>
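The pattern being debated above — `report(...)` returning a `Future` that a task may block on, immediately or at some later point — can be sketched as follows. The `ErrantRecordReporter` interface here is a hand-rolled stand-in (the real one takes a `SinkRecord`, not a `String`), and the toy reporter completes immediately:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class ReporterSketch {

    // Stand-in for the reporter interface discussed in the thread.
    interface ErrantRecordReporter {
        Future<Void> report(String record, Throwable error);
    }

    public static void main(String[] args) throws Exception {
        List<String> deadLetters = new ArrayList<>();

        // Toy reporter: records the bad record (e.g. hand-off to a DLQ
        // producer) and returns an already-completed future.
        ErrantRecordReporter reporter = (record, error) -> {
            deadLetters.add(record);
            return CompletableFuture.completedFuture(null);
        };

        // Inside put(...): report the bad record and keep processing...
        Future<Void> ack =
                reporter.report("bad-record", new RuntimeException("conversion failed"));

        // ...and a task that needs stricter guarantees can block on the
        // future later, e.g. just before committing offsets.
        ack.get();

        System.out.println(deadLetters.size());
    }
}
```

Simple blocking via `Future.get()` is the "common pattern" referred to above; tasks that do not need the guarantee can ignore the returned future entirely.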
Hi Arjun,
Thanks for your feedback.
I agree with moving to Future, those are good points.
I believe an earlier point made for asynchronous functionality was that
modern APIs tend to be asynchronous, as they result in more expressive and
better-defined APIs.
Additionally, because a lot of Kafka C
Hello all,
I'd like to open a vote for KIP-610:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
Thanks,
Aakash
My apologies, had a typo. Meant to say "I will now open up a vote."
Thanks,
Aakash
On Sun, May 17, 2020 at 4:55 PM Aakash Shah wrote:
> Hi all,
>
> Thanks for all the feedback thus far. I've updated the KIP with all the
> suggestions. I will not open up a vote.
>
Hi all,
Thanks for all the feedback thus far. I've updated the KIP with all the
suggestions. I will not open up a vote.
Thanks,
Aakash
On Sun, May 17, 2020 at 3:45 PM Randall Hauch wrote:
> All good points regarding `Future<Void>` instead of
> `Future<RecordMetadata>`, so +1 to that change.
>
> A few more nits. The
Hi all,
I've updated the KIP to reflect all the new agreed-upon suggestions.
Please let me know if you have any more suggestions.
Thanks,
Aakash
On Sun, May 17, 2020 at 12:06 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:
> Hi all,
>
> I'm on board with adding an interface in the
s set this property for the DLQ to something
> larger
> > > than
> > > > 1 if order is not important and they need the extreme performance.
> > > >
> > > > Along with the original errant sink record, the exception thrown will
> > be
> > > &
P quickly
> > so
> > > we can make sure the other parts are the KIP are acceptable?
> > >
> > > Best regards,
> > >
> > > Randall
> > >
> > > On Sat, May 16, 2020 at 12:24 PM Konstantine Karantasis <
> > > konstant.
to conditionally call one method
> or
> > the other in the framework based on configuration. Once you implement the
> > new `put` with something other than its default implementation, as a
> > connector developer, you'll know to adapt to the above.
> >
> > I d
ies however they want. The only
> > drawback is how this is configured (end-users will have to add more lines
> > in the json/properties files). But all configs can simply come from
> worker,
> > I believe this is a relatively minor issue. We should be able to work out
> > co
's not divert from the main use case too much.
>
> I don't easily see how an API definition based on callbacks would simplify
> things here. Keep in mind that we already have the threads we need and we'd
> rather not spin additional. That's the Worker thread that
Just wanted to clarify that I am on board with adding the overloaded
put(...) method.
Thanks,
Aakash
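The overloaded put(...) idea agreed on above can be sketched like this. Types are simplified stand-ins (records are plain strings, and the reporter uses the `BiFunction` shape mentioned later in the thread); the key point is the default implementation, which lets existing connectors compile unchanged while the framework conditionally calls the new overload when error reporting is configured:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.function.BiFunction;

public class OverloadedPutSketch {

    interface SinkTaskLike {
        void put(List<String> records);

        // Default delegates to the old put(...), preserving backward
        // compatibility for connectors that never override it.
        default void put(List<String> records,
                         BiFunction<String, Throwable, Future<Void>> reporter) {
            put(records);
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();

        // A connector that opts in by overriding the new overload.
        SinkTaskLike task = new SinkTaskLike() {
            @Override
            public void put(List<String> records) {
                records.forEach(r -> log.append("wrote:").append(r).append(" "));
            }

            @Override
            public void put(List<String> records,
                            BiFunction<String, Throwable, Future<Void>> reporter) {
                for (String r : records) {
                    if (r.startsWith("bad")) {
                        reporter.apply(r, new RuntimeException("parse error"));
                    } else {
                        log.append("wrote:").append(r).append(" ");
                    }
                }
            }
        };

        // Toy reporter standing in for the framework-provided one.
        BiFunction<String, Throwable, Future<Void>> reporter = (r, e) -> {
            log.append("reported:").append(r).append(" ");
            return CompletableFuture.completedFuture(null);
        };

        task.put(List.of("good-1", "bad-2"), reporter);
        System.out.println(log.toString().trim());
    }
}
```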
On Fri, May 15, 2020 at 7:00 PM Aakash Shah wrote:
> Hi Randall and Konstantine,
>
> As Chris and Arjun mentioned, I think the main concern is the potential
> gap in which deve
definition results in more
> > expressive
> > > > and well defined APIs.
> > > >
> > >
> > > +1
> > >
> > >
> > > > What you describe is easily an opt-in feature for the connector
> > > developer.
>
think the
> BiFunction definition that returns a Future makes sense.
>
> Konstantine
>
>
>
> On Fri, May 15, 2020 at 11:27 AM Aakash Shah wrote:
>
> > Thanks for the additional feedback.
> >
> > I see the benefits of adding an overloaded put(...) over alternat
>
> Best regards,
>
> Randall
>
> On Fri, May 15, 2020 at 6:59 AM Andrew Schofield <
> andrew_schofi...@live.com>
> wrote:
>
> > Hi,
> > Randall's suggestion is really good. I think it gives the flexibility
> > required and also
> > k
also work in earlier versions of
> > the KC runtime. But, the
> > pattern so far is that the task uses the methods of SinkTaskContext to
> > access utilities in the Kafka
> > Connect runtime, and I suggest that reporting a bad record is such a
> > utility. SinkTaskConte
ask that the KC runtime calls to
> provide the error reporting utility
> seems not to match what has gone before.
>
> Thanks,
> Andrew
>
> On 11/05/2020, 19:05, "Aakash Shah" wrote:
>
> I wasn't previously added to the dev mailing list, so I'd li
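Andrew's preferred shape above — exposing the reporter as a runtime utility on the task context, rather than a new method on SinkTask — can be sketched as follows. Names mirror the direction of the discussion, but the types here are simplified stand-ins; the null check is how a connector could stay compatible with older runtimes that lack the utility:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class ContextSketch {

    interface ErrantRecordReporter {
        Future<Void> report(String record, Throwable error);
    }

    interface SinkTaskContextLike {
        // Would return null when the connector runs on an older runtime
        // that does not provide the error-reporting utility.
        ErrantRecordReporter errantRecordReporter();
    }

    static String handle(SinkTaskContextLike context, String badRecord) {
        ErrantRecordReporter reporter = context.errantRecordReporter();
        if (reporter == null) {
            // Older runtime: fall back, e.g. fail the task or log and skip.
            return "no-reporter";
        }
        reporter.report(badRecord, new RuntimeException("bad record"));
        return "reported";
    }

    public static void main(String[] args) {
        SinkTaskContextLike newRuntime =
                () -> (record, error) -> CompletableFuture.completedFuture(null);
        SinkTaskContextLike oldRuntime = () -> null;

        System.out.println(handle(newRuntime, "r1") + "," + handle(oldRuntime, "r1"));
    }
}
```

This matches the existing pattern Andrew describes, where tasks reach Kafka Connect runtime utilities through SinkTaskContext accessors.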
Hi Chris,
Thanks for the feedback!
1. Great point; this better captures the general aim of the proposal.
2. Thanks for the suggestions on points a and b, they are both great. I
will incorporate them.
3. Yep, I'll add this to the sample code and add an explanation.
4. Great point about the addi
I wasn't previously added to the dev mailing list, so I'd like to post my
discussion with Andrew Schofield below for visibility and further
discussion:
Hi Andrew,
Thanks for the reply. The main concern with this approach would be its
backward compatibility. I’ve highlighted the thoughts around th
Aakash Shah created KAFKA-9971:
--
Summary: Error Reporting in Sink Connectors
Key: KAFKA-9971
URL: https://issues.apache.org/jira/browse/KAFKA-9971
Project: Kafka
Issue Type: New Feature
Hello all,
I've created a KIP to handle error reporting for records in sink
connectors, specifically within the context of put(...):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
I would appreciate any kind of feedback.
Thanks,
Aakash
Hello,
I would like to request permission to create a KIP.
My Wiki ID is aakash33 and my email is as...@confluent.io.
Thank you!
Best,
Aakash Shah