Offsets across brokers

2017-08-28 Thread Vignesh
Hi,

If a topic partition is replicated and the leader switches from broker 1 to
broker 2, are the offsets for messages on broker 2 the same as on broker 1?
If not, how can applications that store offsets outside of Kafka handle the
difference?

Thanks,
Vignesh.


In which scenarios would "INVALID_REQUEST" be returned for "Offset Request"

2017-09-22 Thread Vignesh
Hi,

In which scenarios would we get "INVALID_REQUEST" for a Version 1 "Offset
Request" (https://kafka.apache.org/protocol#The_Messages_Offsets)?

I searched for INVALID_REQUEST in https://github.com/apache/kafka and the
file below is the only one that seems related.

https://github.com/apache/kafka/blob/96ba21e0dfb1a564d5349179d844f020abf1e08b/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java

Here, I see that INVALID_REQUEST is returned only on a duplicate topic
partition. Is that the only reason?

The description for the error is broader, though:

"This most likely occurs because of a request being malformed by the client
library or the message was sent to an incompatible broker. See the broker
logs for more details."

Thanks,
Vignesh.
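For anyone decoding responses by hand while debugging this, a small Python sketch of an error-code lookup may help. The table is a hand-copied subset of the protocol error list relevant to this thread, not generated from Errors.java:

```python
# Subset of the Kafka protocol error table: code -> (name, retriable).
KAFKA_ERRORS = {
    0: ("NONE", False),
    42: ("INVALID_REQUEST", False),
    43: ("UNSUPPORTED_FOR_MESSAGE_FORMAT", False),
}

def describe_error(code):
    """Return a human-readable description of a Kafka protocol error code."""
    name, retriable = KAFKA_ERRORS.get(code, ("UNKNOWN", False))
    return f"{code} = {name} (retriable={retriable})"

print(describe_error(42))  # 42 = INVALID_REQUEST (retriable=False)
```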


Re: In which scenarios would "INVALID_REQUEST" be returned for "Offset Request"

2017-09-26 Thread Vignesh
I am just sending the request directly using my own client; the protocol
API version I used is "1"
(https://kafka.apache.org/protocol#The_Messages_Offsets).

The broker version is 0.10.2.0, which supports protocol version 1.

Where are the logs related to such errors stored? Also, is this error level
enabled by default? If not, how can I enable it?

Thanks,
Vignesh.
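Since the request here is built by hand, a sketch of how the common Kafka request header is laid out may help sanity-check the bytes on the wire. This encodes the header only; the ListOffsets request body (replica_id plus the topics array) is omitted:

```python
import struct

def encode_request_header(api_key, api_version, correlation_id, client_id):
    """Encode a Kafka request header: INT16 api_key, INT16 api_version,
    INT32 correlation_id, then a length-prefixed (INT16) client_id string.
    All integers are big-endian, per the Kafka protocol."""
    cid = client_id.encode("utf-8")
    return struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid

# ListOffsets ("Offsets") is api_key 2; sending a v1 request to a broker
# that only understands v0 is one way to provoke an error response.
header = encode_request_header(api_key=2, api_version=1,
                               correlation_id=1, client_id="test")
assert len(header) == 10 + len("test")  # 2+2+4+2 fixed bytes + client id
```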

On Sun, Sep 24, 2017 at 12:52 PM, James Cheng  wrote:

> Your client library might be sending a message that is too old or too new
> for your broker to understand.
>
> What version is your Kafka client library, and what version is your broker?
>
> -James
>
> Sent from my iPhone
>
> > On Sep 22, 2017, at 4:09 PM, Vignesh  wrote:
> >
> > Hi,
> >
> > In which scenarios would we get "INVALID_REQUEST" for a Version 1 "Offset
> > Request"  (https://kafka.apache.org/protocol#The_Messages_Offsets)  ?
> >
> > I searched for INVALID_REQUEST in https://github.com/apache/kafka and
> below
> > is the only file that seems related.
> >
> > https://github.com/apache/kafka/blob/96ba21e0dfb1a564d5349179d844f0
> 20abf1e08b/clients/src/main/java/org/apache/kafka/common/
> protocol/Errors.java
> >
> > Here, I see that invalid request is returned only on duplicate topic
> > partition. Is that the only reason?
> >
> > The description for the error is broader though.
> >
> > "
> > This most likely occurs because of a request being malformed by the
> client
> > library or the message was sent to an incompatible broker. See the broker
> > logs for more details.
> >
> > "
> >
> > Thanks,
> > Vignesh.
>


Re: In which scenarios would "INVALID_REQUEST" be returned for "Offset Request"

2017-09-26 Thread Vignesh
We see the same error even when we use the librdkafka NuGet package,
version 0.11.0 (https://github.com/edenhill/librdkafka,
https://www.nuget.org/packages/librdkafka.redist/).

-Vignesh.

On Tue, Sep 26, 2017 at 11:03 AM, Vignesh  wrote:

> I am just sending the request directly using my own client, Protocol api
> version I used is "1" https://kafka.apache.org/
> protocol#The_Messages_Offsets
> Broker version is .10.2.0 . .This broker version supports protocol version
> 1.
>
> Where are the logs related to such errors stored? Also, is this error
> level enabled by default? If not, How can I enable it?
>
> Thanks,
> Vignesh.
>


Debugging invalid_request response from a .10.2 server for list offset api using librdkafka client

2017-09-27 Thread Vignesh
Hi,

We are using the librdkafka library, version 0.11.0, and calling the List
Offset API with a timestamp against a 0.10.2 Kafka server installed on a
Windows machine.

This request returns an error code, 43 - INVALID_REQUEST.

We have other local installations of Kafka version 0.10.2 (also on Windows)
and are able to use the library successfully there.

Are there any settings on this specific server that could be causing this
error? Which logs can we enable and look at to get additional details about
what is wrong with the request?

Thanks,
Vignesh.


Re: Debugging invalid_request response from a .10.2 server for list offset api using librdkafka client

2017-09-27 Thread Vignesh
Correction to the above mail: we get 42 - INVALID_REQUEST, not 43.
A few other data points:

The server has the following configs set:

inter.broker.protocol.version=0.8.1

log.message.format.version=0.8.1

My understanding is that we should get an unsupported-message-format error
with the above configurations, so why do we get INVALID_REQUEST?


Thanks,

Vignesh.




On Wed, Sep 27, 2017 at 9:51 AM, Vignesh  wrote:

> Hi,
>
> We are using LibrdKafka library version .11.0 and calling List Offset API
> with a timestamp on a 0.10.2 kafka server installed in a windows machine.
>
> This request returns an error code, 43 - INVALID_REQUEST.
>
> We have other local installations of Kafka version 0.10.2 (also on
> Windows) and are able to use the library successfully.
>
> Are there any settings on this specific server that is causing this error?
> Which logs can we enable and look at to get additional details about what
> is wrong with the request?
>
> Thanks,
> Vignesh.
>
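The expectation described above can be written down as a toy decision function. This is illustrative only; the real mapping lives in the broker's request handling, and the cluster in question evidently behaves differently:

```python
INVALID_REQUEST = 42
UNSUPPORTED_FOR_MESSAGE_FORMAT = 43

def expected_list_offsets_error(request_version, message_format_version):
    """Simplified model of the expected error for a ListOffsets request.
    v0 lookups (by segment time) work against any on-disk format; v1+
    lookups need per-message timestamps, which only exist in message
    format 0.10.0 and later."""
    if request_version == 0:
        return None  # no error expected
    if message_format_version < (0, 10, 0):
        return UNSUPPORTED_FOR_MESSAGE_FORMAT
    return None

# With log.message.format.version=0.8.1, a v1 request was expected to
# yield 43, not the 42 observed on this particular cluster.
assert expected_list_offsets_error(1, (0, 8, 1)) == UNSUPPORTED_FOR_MESSAGE_FORMAT
assert expected_list_offsets_error(0, (0, 8, 1)) is None
```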


Re: Debugging invalid_request response from a .10.2 server for list offset api using librdkafka client

2017-09-27 Thread Vignesh
I understand that it won't support it; my only concern is the error code.

Locally, with these settings, I get a message format error, 43, which makes
sense. In one particular cluster we see INVALID_REQUEST (42) instead of
unsupported format (43).

What are the implications of changing the broker protocol version to 0.10.2
for the topics that were created before this change? My assumption is that
they will return 43 for List Offset requests version 1+ and all other
requests would work. Is that correct?

Also, can the message format be changed for a topic from 0.8.1 to 0.10.2?
If not, what is the recommended way to upgrade old topics?


On Sep 27, 2017 11:15 AM, "Hans Jespersen"  wrote:

> The 0.8.1 protocol does not support target timestamps so it makes sense
> that you would get an invalid request error if the client is sending a
> Version 1 or Version 2 Offsets Request. The only Offset Request that a
> 0.8.1 broker knows how to handle is a Version 0 Offsets Request.
>
> From https://kafka.apache.org/protocol
> INVALID_REQUEST 42 False This most likely occurs because of a request being
> malformed by the client library or the message was sent to an incompatible
> broker. See the broker logs for more details.
>
> For more info on the 0.11 Kafka protocol and ListOffset Requests see
>
> https://cwiki.apache.org/confluence/display/KAFKA/A+
> Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-
> OffsetAPI(AKAListOffset)
>
> -hans
>
> /**
>  * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
>  * h...@confluent.io (650)924-2670
>  */
>
>


Kafka & ZooKeeper IDs

2018-03-26 Thread Vignesh
Hello Kafka Users,



I'm setting up a 3-node Kafka cluster on RHEL.

I don't see any documentation about which IDs these services should be
running under.

I'm creating unique IDs for Kafka and ZooKeeper, each on their own nodes.

Are there any restrictions, or any mandatory IDs they should be running
under, like "mqm" for IBM MQ?



Thanks,

Vignesh


Re: Kafka & ZooKeeper IDs

2018-03-26 Thread Vignesh
Hi Anand, thanks!
I was asking about the Linux/Unix user IDs these Kafka and ZooKeeper
processes run under.

On Mon, Mar 26, 2018 at 11:16 AM, Anand, Uttam  wrote:

> Hi Vigensh,
>
>
>
> There is no restriction on ID. It can be any unique number.
>
>
>
> broker.id=0
>
> broker.id=1
>
> broker.id=2
>
>
>
> Thanks
>
> Uttam
>
>
>


Re: Kafka & ZooKeeper IDs

2018-03-26 Thread Vignesh
Hi Christophe,

No, not the PIDs; I mean Unix user IDs.
Are they (Kafka and ZooKeeper) supposed to run under root, or can I define
my own IDs?

Thanks,
Vignesh

On Mon, Mar 26, 2018 at 4:09 PM, Christophe Schmitz <
christo...@instaclustr.com> wrote:

> Hi Vignesh,
>
> Are you talking about the PID (process ID)? If so, those are automatically
> allocated by the kernel and you don't have to worry about.
>
> Cheers,
>
> Christophe
>
> On 27 March 2018 at 03:29, Vignesh  wrote:
>
> > Hi Anand, Thanks..
> > I was asking about linux/unix ids these kafka & zookeeper processes are
> > running.
> >
>
>
>
> --
>
> *Christophe Schmitz - **VP Consulting*
>
> AU: +61 4 03751980 / FR: +33 7 82022899
>
> <https://www.facebook.com/instaclustr>   <https://twitter.com/instaclustr>
> <https://www.linkedin.com/company/instaclustr>
>
> Read our latest technical blog posts here
> <https://www.instaclustr.com/blog/>. This email has been sent on behalf
> of Instaclustr Pty. Limited (Australia) and Instaclustr Inc (USA). This
> email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>


Re: Kafka & ZooKeeper IDs

2018-03-27 Thread Vignesh
Hi Christophe,

Thank you, this helps!
It is user IDs. I untarred the archive and set the appropriate permissions.

-
Vignesh

On Mon, Mar 26, 2018 at 5:32 PM, Christophe Schmitz <
christo...@instaclustr.com> wrote:

> Hi Vignesh
>
> I am not sure what you mean by UNIX IDs is. Do you mean User ID (UID) which
> is the ID (number) of a given user? If so, there is no real requirement.
> You can run as root, or you can run as a user you create, provided that it
> has read and write access to a set of directories (data, logs etc...). Best
> security practice would say that you shouldn't run as root.
> Depending on how you installed Kafka and Zookeeper, it could be that the
> install packager already created a corresponding user (kafka, zookeeper)
> and already set the appropriate permissions to the directory it should
> write to. If you just untarred a binary archive, then it will be up to you
> to set the permissions.
>
> Hope it helps!
>
> Cheers,
>
> Christophe
>
>
> On 27 March 2018 at 08:55, Vignesh  wrote:
>
> > Hi Christophe,
> >
> > No, not the PIDs... UNIX IDs.
> > Are they(kafka & zookeeper) suppose to run under root ? or can i define
> my
> > own Ids?
> >
> > Thanks,
> > Vignesh
> >
>
>


Kafka Broker down - no space left on device

2018-06-28 Thread Vignesh
Hello kafka users,

How do I recover a Kafka broker from a full disk?

I updated the log retention period from 7 days to 1 hour, but this would
take effect only once the broker is restarted.

Are there any options other than increasing the disk space?

Thanks,
Vignesh
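If the data is replicated on other brokers, one pragmatic first step is to find the largest topic-partition directories under the broker's log dir and remove the ones that are safe to drop. A sketch, assuming the standard one-directory-per-partition layout under log.dirs:

```python
import os

def partition_dir_sizes(log_dir):
    """Rank topic-partition directories under a Kafka log dir by disk
    usage, largest first, to pick deletion candidates before restarting a
    full broker. Only safe if the data is replicated on healthy brokers."""
    sizes = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if not os.path.isdir(path):
            continue
        total = sum(
            os.path.getsize(os.path.join(root, f))
            for root, _, files in os.walk(path)
            for f in files
        )
        sizes.append((total, name))
    return sorted(sizes, reverse=True)
```

After cleanup the broker will re-replicate the removed partitions from the current leaders when it rejoins.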


Re: Kafka Broker down - no space left on device

2018-06-28 Thread Vignesh
Thanks, that helped. I deleted the data logs of a few topics.

On Thu, Jun 28, 2018 at 4:51 PM, Zakee  wrote:

> It depends.
>
> You can clean up data folder before starting the broker as long as you
> have data replicated in other healthy brokers. When you start the broker
> with clean data folder, it will start catching up with replica leaders and
> eventually join in-sync replicas.  The catchup traffic impact the overall
> cluster perf based on the retention time. Since you already reduced
> retention to 1hour you should be good.
>
> Thanks
> Zakee
>
>


Topic - "Summed Recent offsets" property question

2018-10-04 Thread Vignesh
Hi Kafka Users,

When does the "Summed Recent Offsets" value get reset on a topic?
I have a topic "test" on which this value didn't change over time, even
after it passed the retention hours.

This topic also has a stale consumer group "sub1", which I'm unable to
delete:

kafka-consumer-groups.sh --bootstrap-server hostname:port --describe --group sub1
Consumer group 'sub1' has no active members.

kafka-consumer-groups.sh --bootstrap-server hostname:port --delete --group sub1
Option '[delete]' is only valid with '[zookeeper]'.
Note that there's no need to delete group metadata for the new consumer as
the group is deleted when the last committed offset for that group expires.

Also, when I try to display my consumer details using ZooKeeper, it tells
me the consumer group "sub1" is not available:

kafka-consumer-groups.sh --zookeeper hostname:port --describe --group sub1
Note: This will only show information about consumers that use ZooKeeper
(not those using the Java consumer API).
Error: The consumer group 'sub1' does not exist.

Thanks,
Vignesh


upgrading kafka & zookeeper

2019-08-05 Thread Vignesh
Hi kafka Users,

I have a novice question about a Kafka upgrade; this is the first time I'm
upgrading Kafka on Linux.

My current version is kafka_2.11-1.0.0.tgz; when I initially set it up, it
created a folder named kafka_2.11-1.0.0.

Now I have downloaded a new version, kafka_2.12-2.3.0.tgz. Extracting it
will create a new folder, kafka_2.12-2.3.0, which will result in two
independent Kafka installations, each with its own server.properties.

As per the documentation, I have to update server.properties with the two
properties below:

inter.broker.protocol.version=2.3
log.message.format.version=2.3

How does this work if it is going to install into a new directory with a
new server.properties?

How can I merge server.properties and do the upgrade? Please share any
documents or steps you have.



Thanks,
Vignesh
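The merge itself is mechanical: carry the old server.properties keys into the new installation and pin the versions required by the upgrade notes on top. A sketch (the exact version values to pin depend on the upgrade documentation for your source and target releases, so treat the ones below as placeholders):

```python
def load_properties(path):
    """Parse a Java-style .properties file into a dict (comments and
    blank lines ignored; no escape handling, which broker configs
    rarely need)."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def merged_upgrade_config(old_path, pinned):
    """Carry the old broker settings forward, then overlay the versions
    pinned for the rolling upgrade."""
    props = load_properties(old_path)
    props.update(pinned)
    return props

# During the first phase of a rolling upgrade from 1.0, the upgrade notes
# have you pin the versions at the *current* release before bumping them.
pinned = {
    "inter.broker.protocol.version": "1.0",
    "log.message.format.version": "1.0",
}
```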


Kafka Source Connector to Event Streams

2020-02-04 Thread Vignesh
Hi,

I'm doing a proof of concept to read messages from an IBM MQ queue on-prem
and write them to an Event Streams topic in Bluemix.

A few questions:
To run Kafka Connect, do I need a Kafka cluster or an instance running?

I downloaded the Kafka binaries, extracted them on a Linux box, and the
following jar is built and kept in the libs folder of the Kafka
installation:
kafka-connect-mq-source-1.2.0-SNAPSHOT-jar-with-dependencies.jar

Thanks,
David


Re: Kafka Source Connector to Event Streams

2020-02-04 Thread Vignesh
Thanks Andrew!


On Tue, Feb 4, 2020 at 2:59 PM Andrew Schofield 
wrote:

> Hi David,
> The instance of Event Streams is a Kafka cluster. You will need the Kafka
> Connect framework in order to provide an environment to run the connector
> that you built.
>
> Thanks,
> Andrew Schofield
>
> On 04/02/2020, 18:24, "Vignesh"  wrote:
>
> Hi,
>
> I doing a proof of concept to read messages from IBM MQ queue in
> on-perm
> and write it to Event Stream topic in bluemix.
>
> I few questions,
> To run Kafka connect , do i need a Kafka cluster or an instance
> running ?
>
> I downloaded Kafka binaries and extracted in a linux box and following
> jar
> is built and kept in libs folder of kafka installation..
> kafka-connect-mq-source-1.2.0-SNAPSHOT-jar-with-dependencies.jar
>
> Thanks,
> David
>
>
>


Kafka Connect - REST API https

2020-08-01 Thread Vignesh
Hello Kafka Users,

I'm trying to secure the REST endpoint in Kafka Connect with HTTPS.

Below is my config in the connect-distributed.properties file. I created a
self-signed certificate on my Linux VM:

listeners=https://myhostname.x.xx.com:8085
listeners.https.ssl.keystore.location=/home/kafka/server.keystore.jks
listeners.https.ssl.keystore.password=***
listeners.https.ssl.key.password=**
listeners.https.ssl.truststore.location=/home/kafka/server.truststore.jks
listeners.https.ssl.truststore.password=**
listeners.https.ssl.client.auth=required

In connect.log, I see my listener-specific SSL properties are ignored as
unknown configs, and when I try to open my REST endpoint in a web browser
using HTTPS, I get an SSL error.

Error in the web browser:

myhostname.x.xx.com uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH

Error in connect.log:

[2020-08-01 17:36:42,705] WARN The configuration
'listeners.https.ssl.keystore.location' was supplied but isn't a known
config. (org.apache.kafka.clients.admin.AdminClientConfig:355)

Any thoughts/suggestions on what I'm missing here?

Thanks !!!

-
Vignesh
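A quick self-check before restarting the worker is to verify that the per-listener SSL keys are present whenever an https listener is configured. A sketch, with the key list trimmed to the keystore-related entries for illustration:

```python
def check_connect_https(props):
    """If any Connect REST listener uses https, the worker also needs
    per-listener SSL settings (prefix 'listeners.https.'); return the
    keys from our checklist that are missing."""
    listeners = [l.strip() for l in props.get("listeners", "").split(",") if l.strip()]
    if not any(l.startswith("https://") for l in listeners):
        return []  # plain-http worker: no SSL keys needed
    required = [
        "listeners.https.ssl.keystore.location",
        "listeners.https.ssl.keystore.password",
        "listeners.https.ssl.key.password",
    ]
    return [k for k in required if k not in props]
```

Note that this only checks presence, not whether the keystore itself is valid; a bad or unreadable keystore can still produce ERR_SSL_VERSION_OR_CIPHER_MISMATCH in the browser.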


Kafka Connect REST HTTPS

2020-08-05 Thread Vignesh
Hi ,

Has anyone tested HTTPS for the REST API in Kafka Connect? Do we have a
working version?

I'm using Apache Kafka version 2.4, using Kafka Connect alone.

The ssl.* properties are prefixed with listeners.https.*, and all of these
properties are ignored as "unknown configuration" in my connect.log file.

Please advise.

Thanks,
Vignesh


Re: Internal Connect REST endpoints are insecure

2021-09-23 Thread Vignesh
I have worked in this space, but I didn't take the HTTPS options within
Kafka Connect; instead, I deployed Kafka Connect in a Kubernetes cluster
and used an Ingress exposing HTTPS to give clients access to my Kafka
Connect REST API.

Thanks,
Vignesh

On Fri, Sep 17, 2021 at 10:31 AM Kuchansky, Valeri 
wrote:

> Hi Community Members,
>
> I am  following  available documents to have kafka-connect REST  API
> secured.
> In particular this one<
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface
> >.
> I do not see that use any of listeners.https.ssl.* options make any
> difference.
> I would appreciate any help in creating of a valid configuration.
> My running Kafka version is 2.5.
>
> Thanks,
> Valeri
>
> Notice: This e-mail together with any attachments may contain information
> of Ribbon Communications Inc. and its Affiliates that is confidential
> and/or proprietary for the sole use of the intended recipient. Any review,
> disclosure, reliance or distribution by others or forwarding without
> express permission is strictly prohibited. If you are not the intended
> recipient, please notify the sender immediately and then delete all copies,
> including any attachments.


Kafka Connect - REST API security

2022-05-13 Thread Vignesh
Hello  ,

How can we enable authentication for the Kafka Connect REST API? I see
documentation for setting up basic authentication using a JAAS config
file. Can we integrate LDAP?

REST calls are accessible and viewable by anyone on the same network.

I see we don't have this today, as mentioned in the issue below:
https://github.com/strimzi/strimzi-kafka-operator/issues/3229

please advise.

Thanks,
Vignesh


Does offsetsForTimes use createtime of logsegment file?

2017-01-05 Thread Vignesh
Hi,

The offsetsForTimes
<https://kafka.apache.org/0101/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#offsetsForTimes(java.util.Map)>
function returns the offset for a given timestamp. Does it use the
message's timestamp (which could be LogAppendTime or set by the user) or
the creation time of the log segment file?

KIP-33
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index>
adds a timestamp-based index, and it is available only from 0.10.1. Does
the above function work on 0.10.0? If so, are there any differences in how
it works between versions 0.10.0 and 0.10.1?

Thanks,
Vignesh.


Re: Does offsetsForTimes use createtime of logsegment file?

2017-01-05 Thread Vignesh
Thanks. I didn't realize ListOffsetRequest V1 is only available in 0.10.1
(which has KIP-33, the time index).
When the timestamp is set by the user (CreateTime) and is not always
increasing, would this method still return the offset of the first message
with a timestamp greater than or equal to the provided timestamp?

For example, in the scenario below:

Message1, Timestamp = T1, Offset = 0
Message2, Timestamp = T0 (or T2), Offset = 1
Message3, Timestamp = T1, Offset = 2

Would offsetForTimestamp(T1) return the offset of the earliest message
with timestamp T1 (i.e., offset 0 in the above example)?


-Vignesh.
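The scenario above can be modeled as a linear scan in offset order, which matches the behavior confirmed later in the thread (offset 0 for T1). This is a toy model of the semantics, not the broker's index-based implementation:

```python
def offset_for_timestamp(messages, target_ts):
    """Toy model of offsetsForTimes with non-monotonic CreateTime stamps:
    scan in offset order and return the first offset whose timestamp is
    >= the target, or None if no message qualifies."""
    for offset, ts in messages:  # messages as (offset, timestamp) pairs
        if ts >= target_ts:
            return offset
    return None

# The example from the mail above, with T0 < T1 < T2:
T0, T1, T2 = 0, 1, 2
messages = [(0, T1), (1, T0), (2, T1)]
assert offset_for_timestamp(messages, T1) == 0  # earliest match wins
```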

On Thu, Jan 5, 2017 at 8:19 PM, Ewen Cheslack-Postava 
wrote:

> On Wed, Jan 4, 2017 at 11:54 PM, Vignesh  wrote:
>
> > Hi,
> >
> > offsetsForTimes
> > <https://kafka.apache.org/0101/javadoc/org/apache/kafka/
> clients/consumer/
> > KafkaConsumer.html#offsetsForTimes(java.util.Map)>
> > function
> > returns offset for a given timestamp. Does it use message's timestamp
> > (which could be LogAppendTime or set by user) or creation time of
> > logsegment file?
> >
> >
> This is actually tied to how the ListOffsetsRequest is handled. But if
> you're on a recent version, then the KIP
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
> made it use the more accurate version based on message timestamps.
>
>
> >
> > KIP-33
> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 33+-+Add+a+time+based+log+index>
> > adds timestamp based index, and it is available only from 0.10.1 . Does
> >  above function work on 0.10.0 ? If so, are there any differences in how
> it
> > works between versions 0.10.0 and 0.10.1 ?
> >
> >
> The KIP was only adopted and implemented in 0.10.1+. It is not available in
> 0.10.0.
>
>
> > Thanks,
> > Vignesh.
> >
>


Re: Does offsetsForTimes use createtime of logsegment file?

2017-01-19 Thread Vignesh
Another question: with getOffsetsBefore, we used to be able to get offsets
for a time in older versions. 0.10 doesn't have an equivalent method.

Is there any other way to achieve the same functionality as
getOffsetsBefore in 0.10? Does a 0.10 server respond to a
ListOffsetRequest V0 request?


On Fri, Jan 6, 2017 at 1:26 PM, Ewen Cheslack-Postava 
wrote:

> It would return the earlier one, offset 0.
>
> -Ewen
>
>


Re: Kafka Connect on Kubernetes: Statefulset vs Deployment

2025-06-15 Thread Vignesh
Kafka Connect is a stateless component by design. It relies on external
Kafka topics to persist its state, including connector configurations,
offsets, and status updates. In a distributed Kafka Connect cluster, this
state is managed through the following configurable topics:

   - config.storage.topic – stores connector configurations
   - offset.storage.topic – stores source connector offsets
   - status.storage.topic – stores the status of connectors and tasks
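The three topics above are the entirety of a distributed worker's persistent state. A minimal sketch of the relevant worker settings (the topic names and replication factors are illustrative; the keys are the real worker properties named above):

```python
# State-related settings for a distributed Kafka Connect worker.
worker_props = {
    "group.id": "connect-cluster",
    # All durable state lives in these Kafka topics, not on the pod:
    "config.storage.topic": "connect-configs",
    "offset.storage.topic": "connect-offsets",
    "status.storage.topic": "connect-status",
    # Replicate the state topics so they survive broker failures.
    "config.storage.replication.factor": "3",
    "offset.storage.replication.factor": "3",
    "status.storage.replication.factor": "3",
}
```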

Because Kafka Connect does not maintain any state locally, it is not
dependent on a specific IP address or hostname. As a result, it is best to
deploy Kafka Connect using a *Kubernetes Deployment* rather than a
*StatefulSet*, since Deployments are better suited for stateless
applications and provide more flexibility with scaling and rolling updates.

Additionally, it is common practice to expose the Kafka Connect REST API
via an *Ingress*, allowing external systems to submit and manage
connectors. We have deployed several instances of this as Deployments for
our use case, based on the repo below (for reference):
https://github.com/ibm-messaging/kafka-connect-mq-source

Thanks,
Vignesh

On Sun, Jun 15, 2025 at 12:12 AM Prateek Kohli 
wrote:

> Hi All,
>
> I'm building a custom Docker image for kafka Connect and planning to run it
> on Kubernetes. I'm a bit stuck on whether I should use a Deployment or a
> StatefulSet.
>
> From what I understand, the main difference that could affect Kafka Connect
> is the hostname/IP behaviour. With a Deployment, pod IPs and hostnames can
> change after restarts. With a StatefulSet, each pod gets a stable hostname
> (like connect-0, connect-1, etc.)
>
> My question is: Does it really matter for Kafka Connect if the pod
> IPs/hostname change, considering its a stateless application?
>
> Thanks
>