Hi
Use it with --command-config client_security.properties and pass configuration
like the following in the properties file:
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="*" \
  password="*";
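Note that the mechanism and the login module in sasl.jaas.config have to match: the PLAIN mechanism uses PlainLoginModule, while the SCRAM mechanisms use ScramLoginModule. A minimal client_security.properties sketch for a SCRAM-SHA-512 setup (credentials and truststore path are placeholders, not from this mail):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="myuser" \
  password="mypassword";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit

It can then be passed to the CLI tools that accept --command-config, for example:
kafka-topics.sh --bootstrap-server broker:9093 --list --command-config client_security.properties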
Congratulations
On Mon, 19 Oct, 2020, 11:02 pm Bill Bejeck, wrote:
> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce th
Hi John
Please find my inline response below
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 8:22 PM John Roesler wrote:
> Hi Deepak,
>
> It sounds like you're saying that the exception handler is
> correctly indicating that Streams should "Continue", and
Hi Team
Just a reminder.
Can you please help me with this?
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav
wrote:
> Hi Team
>
> I have created a CustomExceptionHandler class by
> implementing the DeserializationExceptionHandler interface to handle the
Let me know if I missed anything.
Regards and Thanks
Deepak Raghav
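For readers following this thread: a minimal sketch of such a handler, assuming the goal is to log the bad record and let Streams continue (class and logger names are illustrative, not taken from the original mail):

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomExceptionHandler implements DeserializationExceptionHandler {

    private static final Logger log = LoggerFactory.getLogger(CustomExceptionHandler.class);

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        // Log the poison-pill record and keep processing the rest of the partition.
        log.warn("Skipping record at topic={} partition={} offset={} due to deserialization error",
                 record.topic(), record.partition(), record.offset(), exception);
        return DeserializationHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // Nothing to configure in this sketch.
    }
}

The handler is registered via default.deserialization.exception.handler
(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG) in the Streams configuration.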
Hi Tom
Can you please reply to this.
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 10:05 PM Deepak Raghav
wrote:
> Hi Tom
>
> I have to change the log level at runtime, i.e. without restarting the
> worker process.
>
> Can you please share any suggestion
Hi Tom
I have to change the log level at runtime, i.e. without restarting the worker
process.
Can you please share any suggestions on how to do this with log4j?
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 7:09 PM Tom Bentley wrote:
> Hi Deepak,
>
> https://issues.apache.org/ji
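For anyone searching the archive later: since Apache Kafka 2.4 (KIP-495) the Connect worker exposes an admin REST endpoint for changing log levels at runtime, without restarting the worker. A sketch, assuming the worker's REST API listens on localhost:8083 (host, port and logger name are placeholders):

# Inspect the current logger levels on this worker
curl -s http://localhost:8083/admin/loggers

# Raise a specific logger to DEBUG on this worker
curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"level": "DEBUG"}' \
  http://localhost:8083/admin/loggers/org.apache.kafka.connect

The change only affects the worker that receives the request and is not persisted across restarts.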
Hi Team
Request you to please have a look.
Regards and Thanks
Deepak Raghav
On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav
wrote:
> Hi Team
>
> I have some source connectors, which use the logging provided by the
> kafka-connect framework.
>
> Now I need to change the log
log4j2, could you please help me with this.
Regards and Thanks
Deepak Raghav
Hi Robin
Request you to please reply.
Regards and Thanks
Deepak Raghav
On Wed, Jun 10, 2020 at 11:57 AM Deepak Raghav
wrote:
> Hi Robin
>
> Can you please reply.
>
> I just want to add one more thing: yesterday I tried with
> connect.protocol=eager. Task distrib
Hi Robin
Can you please reply.
I just want to add one more thing: yesterday I tried with
connect.protocol=eager, and task distribution was balanced after that.
Regards and Thanks
Deepak Raghav
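For context on the setting mentioned above: the rebalance protocol is chosen per worker via connect.protocol in the worker properties; eager forces the older stop-the-world rebalancing, while the compatible/sessioned values (the default in recent releases) use the incremental cooperative protocol from KIP-415. A minimal sketch of the worker configuration change being described:

# connect-distributed.properties
connect.protocol=eager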
On Tue, Jun 9, 2020 at 2:37 PM Deepak Raghav
wrote:
> Hi Robin
>
> Thanks for your
understanding is correct or not.
Regards and Thanks
Deepak Raghav
On Tue, May 26, 2020 at 8:20 PM Robin Moffatt wrote:
> The KIP for the current rebalancing protocol is probably a good reference:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Re
Hi Team
Just a Gentle Reminder.
Regards and Thanks
Deepak Raghav
On Fri, May 29, 2020 at 1:15 PM Deepak Raghav
wrote:
> Hi Team
>
> Recently, I saw strange behavior in kafka-connect. We have a source
> connector with a single task only, which reads from an S3 bucket and cop
connector, a task can be
left assigned on both worker nodes.
Note: I have seen this only once; after that it was never reproduced.
Regards and Thanks
Deepak Raghav
I cannot show this mail as a
reference.
It would be great if you could share any link or discussion reference
regarding the same.
Regards and Thanks
Deepak Raghav
On Thu, May 21, 2020 at 2:12 PM Robin Moffatt wrote:
> I don't think you're right to assert that this is "
chchargeableevent",
  "connector": {
    "state": "RUNNING",
    "worker_id": "10.0.0.5:8080"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "10.0.0.5:8080
ss like below:
W1      W2
C1T1    C1T2
C2T2    C2T2
I hope that answers your question.
Regards and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 4:42 PM Robin Moffatt wrote:
> OK, I understand better now.
>
> You can read more abou
ected.
Regards and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 1:41 PM Robin Moffatt wrote:
> So you're running two workers on the same machine (10.0.0.4), is
> that correct? Normally you'd run one worker per machine unless there was a
> particular reason otherwise.
>
Hi
Please, can anybody help me with this?
Regards and Thanks
Deepak Raghav
On Tue, May 19, 2020 at 1:37 PM Deepak Raghav
wrote:
> Hi Team
>
> We have two worker nodes in a cluster and 2 connectors with 10 tasks
> each.
>
> Now, suppose we have two kafka connect pr