Info regarding Kafka topic

2017-06-08 Thread BigData dev
Hi,
I have a 3-node Kafka broker cluster.
I created a topic, and the leader for the topic is broker 1 (1001). Then that
broker died.
But when I look at the topic's state in ZooKeeper, the leader is still set to
broker 1 (1001) and the ISR is set to 1001. Is this a bug in Kafka? Now that
the leader is dead, shouldn't the leader have been set to none?

[zk: localhost:2181(CONNECTED) 7] get /brokers/topics/t3/partitions/0/state
{"controller_epoch":1,"leader":1001,"version":1,"leader_epoch":1,"isr":[1001]}
cZxid = 0x10078
ctime = Thu Jun 08 14:50:07 PDT 2017
mZxid = 0x1008c
mtime = Thu Jun 08 14:51:09 PDT 2017
pZxid = 0x10078
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 78
numChildren = 0
[zk: localhost:2181(CONNECTED) 8]


And when I use the describe command, the output is:

[root@meets2 kafka-broker]# bin/kafka-topics.sh --describe --topic t3 --zookeeper localhost:2181
Topic:t3 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: t3 Partition: 0 Leader: 1001 Replicas: 1001,1003 Isr: 1001


When I use the --unavailable-partitions option, I can see the partition is
correctly reported as unavailable:

[root@meets2 kafka-broker]# bin/kafka-topics.sh --describe --topic t3 --zookeeper localhost:2181 --unavailable-partitions
Topic: t3 Partition: 0 Leader: 1001 Replicas: 1001,1003 Isr: 1001


But in the ZooKeeper topic state, the leader should have been set to none,
not the dead broker, once the broker died. Is this by design, or is it a bug
in Kafka? Could you please provide any information on this?


Thanks,
Bharat



2016-04-12 Thread BigData dev
Hi All,
I am facing an issue with a Kerberized Kafka cluster.

I followed the steps to enable SASL on Kafka from the link below:
http://docs.confluent.io/2.0.0/kafka/sasl.html



After this, when I start the Kafka server I get the error below:
[2016-04-12 16:59:26,201] ERROR [KafkaApi-1001] error when handling request
Name:LeaderAndIsrRequest;Version:0;Controller:1001;ControllerEpoch:3;CorrelationId:3;ClientId:1001;Leaders:BrokerEndPoint(1001,
hostname.com,6667);PartitionState:(t1,0) ->
(LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:1,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001),(ambari_kafka_service_check,0)
->
(LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:2,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001)
(kafka.server.KafkaApis)
kafka.common.ClusterAuthorizationException: Request
Request(0,9.30.150.20:6667-9.30.150.20:37550,Session(User:kafka,/9.30.150.20),null,1460505566200,SASL_PLAINTEXT)
is not authorized.
at kafka.server.KafkaApis.authorizeClusterAction(KafkaApis.scala:910)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:113)
at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)


Regards,
Bharat



2016-04-14 Thread BigData dev
Hi Ismael,
Thank you for providing the link.
I was able to resolve the issue.
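
For the archives: the usual cause of this ClusterAuthorizationException is
enabling an authorizer without listing the broker's own principal as a super
user. A minimal sketch of the relevant server.properties entries (the
authorizer class and the User:kafka principal are assumptions about this
cluster; adjust to your environment):

# server.properties (sketch; principal name is an assumption)
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Brokers authenticate to each other as User:kafka here, so that principal
# must be a super user for inter-broker requests like LeaderAndIsr to pass.
super.users=User:kafka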


Regards,
Bharat


On Wed, Apr 13, 2016 at 1:41 AM, Ismael Juma  wrote:

> Hi Bharat,
>
> It looks like authorization is not configured correctly. I suggest taking a look
> at the following blog post that configures authentication and authorization
> and includes a vagrant setup that you can test:
>
>
> http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption
>
> Ismael
>
> On Wed, Apr 13, 2016 at 1:06 AM, BigData dev 
> wrote:
>
> > Hi All,
> > I am facing an issue with a Kerberized Kafka cluster.
> >
> > I followed the steps to enable SASL on Kafka from the link below:
> > http://docs.confluent.io/2.0.0/kafka/sasl.html
> >
> >
> >
> > After this, when I start the Kafka server I get the error below:
> > [2016-04-12 16:59:26,201] ERROR [KafkaApi-1001] error when handling
> request
> >
> >
> Name:LeaderAndIsrRequest;Version:0;Controller:1001;ControllerEpoch:3;CorrelationId:3;ClientId:1001;Leaders:BrokerEndPoint(1001,
> > hostname.com,6667);PartitionState:(t1,0) ->
> >
> >
> (LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:1,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001),(ambari_kafka_service_check,0)
> > ->
> >
> >
> (LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:2,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001)
> > (kafka.server.KafkaApis)
> > kafka.common.ClusterAuthorizationException: Request
> > Request(0,9.30.150.20:6667-9.30.150.20:37550,Session(User:kafka,/
> > 9.30.150.20),null,1460505566200,SASL_PLAINTEXT)
> > is not authorized.
> > at kafka.server.KafkaApis.authorizeClusterAction(KafkaApis.scala:910)
> > at
> > kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:113)
> > at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
> > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Regards,
> > Bharat
> >
>


Reg: Kafka With Kerberos/SSL [Enhancement to add option, Need suggestions on this]

2016-04-14 Thread BigData dev
Hi All,
When Kafka is running on a Kerberized cluster or with SSL, can we add a
security.protocol option so that the user can give PLAINTEXT, SSL,
SASL_PLAINTEXT, or SASL_SSL? This would be helpful when running the console
producer and console consumer.

./bin/kafka-console-producer.sh --broker-list <broker-list> --topic <topic> --security-protocol SASL_PLAINTEXT


./bin/kafka-console-consumer.sh --zookeeper c6401.ambari.apache.org:2181
--topic test_topic --from-beginning --security-protocol
SASL_PLAINTEXT



Currently, this property can be configured in producer.properties when
running the console producer, and in consumer.properties when running the
console consumer.
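
For illustration, a sketch of that file-based workaround (flag names as in
Kafka 0.9/0.10; the broker list, ZooKeeper address, and topic are
placeholders):

# producer.properties / consumer.properties
security.protocol=SASL_PLAINTEXT

./bin/kafka-console-producer.sh --broker-list <broker-list> --topic <topic> \
  --producer.config producer.properties

./bin/kafka-console-consumer.sh --zookeeper <zk-host:port> --topic <topic> \
  --consumer.config consumer.properties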

Can we add this as a command-line option alongside the other options such as
--topic and --broker-list?

Please share your thoughts on whether this would be useful as a feature. If
it is, I will create a JIRA and work on it.


Note: This is how the HDP stack works with Kafka 0.9.0; the link is provided
below.

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-produce-events.html




Regards,
Bharat


Re: kafka-console-consumer using which API?

2016-04-25 Thread BigData dev
By default it uses the old consumer.
When you specify --new-consumer, it uses the new consumer.
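
For illustration, a sketch of the two invocations (flag names as in Kafka
0.9/0.10; host, port, and topic are placeholders):

# Old (ZooKeeper-based) consumer -- the default:
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test_topic

# New (broker-based) consumer:
./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test_topic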


Regards,
Bharat


On Mon, Apr 25, 2016 at 1:07 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:

> Does kafka-console-consumer.sh utilize New Consumer API or Old Consumer
> API?
>
>


Reg: Kafka-Acls

2016-05-05 Thread BigData dev
Hi,
When I run the command
 /bin/kafka-acls.sh --topic permissiontopic --add --allow-host {host}
--allow-principal User:dev --operation Write --authorizer-properties
zookeeper.connect={host:port}

The output says the ACLs were set.

But when I check in ZooKeeper using the command below, it does not show the
ACLs I set for user dev.

[zk: (CONNECTED) 13] getAcl /kafka-acl/Topic/permissiontopic
'world,'anyone
: r
'sasl,'kafka
: cdrwa

Is my understanding correct that Kafka ACLs are written to a ZooKeeper node?


Because of this, when I run the producer, it fails with a topic authorization
error.
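
One possible explanation (an assumption, not verified against this cluster):
getAcl shows the ZooKeeper permissions on the znode itself, not the Kafka
ACLs stored inside it. The SimpleAclAuthorizer stores Kafka ACLs as JSON data
in the znode, which you read with get; the output shape below is illustrative:

[zk: (CONNECTED) 14] get /kafka-acl/Topic/permissiontopic
{"version":1,"acls":[{"principal":"User:dev","permissionType":"Allow","operation":"Write","host":"<host>"}]}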

If anyone has used this, could you please provide input?

Regards,
Bharat


Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-16 Thread BigData dev
+1.


Thanks,
Bharat

On Thu, Jun 16, 2016 at 1:59 PM, Adam Kunicki  wrote:

> +1
>
> Adam Kunicki
> StreamSets | Field Engineer
> mobile: 415.890.DATA (3282) | linkedin
>
> On Thu, Jun 16, 2016 at 1:56 PM, Craig Swift <
> craig.sw...@returnpath.com.invalid> wrote:
>
> > +1
> >
> > Craig J. Swift
> > Principal Software Engineer - Data Pipeline
> > ReturnPath Inc.
> > Work: 303-999-3220 Cell: 720-560-7038
> >
> > On Thu, Jun 16, 2016 at 2:50 PM, Henry Cai 
> > wrote:
> >
> > > +1 for Lambda expression.
> > >
> > > On Thu, Jun 16, 2016 at 1:48 PM, Rajiv Kurian 
> > wrote:
> > >
> > > > +1
> > > >
> > > > On Thu, Jun 16, 2016 at 1:45 PM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start a discussion on making Java 8 a minimum
> > > requirement
> > > > > for Kafka's next feature release (let's say Kafka 0.10.1.0 for
> now).
> > > This
> > > > > is the first discussion on the topic so the idea is to understand
> how
> > > > > people feel about it. If people feel it's too soon, then we can
> pick
> > up
> > > > the
> > > > > conversation again after Kafka 0.10.1.0. If the feedback is mostly
> > > > > positive, I will start a vote thread.
> > > > >
> > > > > Let's start with some dates. Java 7 hasn't received public updates
> > > since
> > > > > April 2015[1], Java 8 was released in March 2014[2] and Java 9 is
> > > > scheduled
> > > > > to be released in March 2017[3].
> > > > >
> > > > > The first argument for dropping support for Java 7 is that the last
> > > > public
> > > > > release by Oracle contains a large number of known security
> > > > > vulnerabilities. The effectiveness of Kafka's security features is
> > > > reduced
> > > > > if the underlying runtime is not itself secure.
> > > > >
> > > > > The second argument for moving to Java 8 is that it adds a number
> of
> > > > > compelling features:
> > > > >
> > > > > * Lambda expressions and method references (particularly useful for
> > the
> > > > > Kafka Streams DSL)
> > > > > * Default methods (very useful for maintaining compatibility when
> > > adding
> > > > > methods to interfaces)
> > > > > * java.util.stream (helpful for making collection transformations
> > more
> > > > > concise)
> > > > > * Lots of improvements to java.util.concurrent (CompletableFuture,
> > > > > DoubleAdder, DoubleAccumulator, StampedLock, LongAdder,
> > > LongAccumulator)
> > > > > * Other nice things: SplittableRandom, Optional (and many others I
> > have
> > > > not
> > > > > mentioned)
> > > > >
> > > > > The third argument is that it will simplify our testing matrix, we
> > > won't
> > > > > have to test with Java 7 any longer (this is particularly useful
> for
> > > > system
> > > > > tests that take hours to run). It will also make it easier to
> support
> > > > Scala
> > > > > 2.12, which requires Java 8.
> > > > >
> > > > > The fourth argument is that many other open-source projects have
> > taken
> > > > the
> > > > > leap already. Examples are Cassandra[4], Lucene[5], Akka[6], Hadoop
> > > 3[7],
> > > > > Jetty[8], Eclipse[9], IntelliJ[10] and many others[11]. Even
> Android
> > > will
> > > > > support Java 8 in the next version (although it will take a while
> > > before
> > > > > most phones will use that version sadly). This reduces (but does
> not
> > > > > eliminate) the chance that we would be the first project that would
> > > > cause a
> > > > > user to consider a Java upgrade.
> > > > >
> > > > > The main argument for not making the change is that a reasonable
> > number
> > > > of
> > > > > users may still be using Java 7 by the time Kafka 0.10.1.0 is
> > released.
> > > > > More specifically, we care about the subset who would be able to
> > > upgrade
> > > > to
> > > > > Kafka 0.10.1.0, but would not be able to upgrade the Java version.
> It
> > > > would
> > > > > be great if we could quantify this in some way.
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Ismael
> > > > >
> > > > > [1] https://java.com/en/download/faq/java_7.xml
> > > > > [2]
> > https://blogs.oracle.com/thejavatutorials/entry/jdk_8_is_released
> > > > > [3] http://openjdk.java.net/projects/jdk9/
> > > > > [4] https://github.com/apache/cassandra/blob/trunk/README.asc
> > > > > [5]
> > > https://lucene.apache.org/#highlights-of-this-lucene-release-include
> > > > > [6] http://akka.io/news/2015/09/30/akka-2.4.0-released.html
> > > > > [7] https://issues.apache.org/jira/browse/HADOOP-11858
> > > > > [8] https://webtide.com/jetty-9-3-features/
> > > > > [9] http://markmail.org/message/l7s276y3xkga2eqf
> > > > > [10]
> > > > >
> > > > >
> > > >
> > >
> >
> https://intellij-support.jetbrains.com/hc/en-us/articles/206544879-Selecting-the-JDK-version-the-IDE-will-run-under
> > > > > [11] http://markmail.org/message/l7s276y3xkga2eqf
> > > > >
> > > >
> > >
> >
>


Reg: SSL setup

2016-08-03 Thread BigData dev
Hi,
Could you please provide information on setting up self-signed certificates
in Kafka? The Kafka documentation covers only the CA-signed setup:

http://kafka.apache.org/documentation.html#security_ssl


This is because we need to provide the truststore and keystore parameters
during configuration.

Or, to work with self-signed certificates, do we need to import every node's
certificate into the truststore on all machines?
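
For reference, a sketch of what that self-signed approach would look like
with keytool (an assumption based on standard JKS handling, not a verified
recipe; aliases, store names, and file names are placeholders):

# On each broker: generate a self-signed key pair
keytool -keystore kafka.server.keystore.jks -alias broker1 -validity 365 \
  -genkey -keyalg RSA

# Export that broker's certificate
keytool -keystore kafka.server.keystore.jks -alias broker1 -export \
  -file broker1.crt

# On every machine: import each broker's certificate into the truststore
keytool -keystore kafka.server.truststore.jks -alias broker1 -import \
  -file broker1.crt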

If you have worked on this, could you please share the details?


Thanks,
Bharat


Re: Kafka ACLs CLI Auth Error

2016-08-08 Thread BigData dev
Hi,
I think the JAAS config file needs to be changed:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/hostname.abc@abc.com";
};


You can follow this blog, which provides complete steps for Kafka ACLs:

https://developer.ibm.com/hadoop/2016/07/20/kafka-acls/



Thanks,

Bharat




On Mon, Aug 8, 2016 at 2:08 PM, Derar Alassi  wrote:

> Hi all,
>
> I have  3-node ZK and Kafka clusters. I have secured ZK with SASL. I got
> the keytabs done for my brokers and they can connect to the ZK ensemble
> just fine with no issues. All gravy!
>
> Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I
> did export the KAFKA_OPTS using the following command:
>
>
>  export  KAFKA_OPTS="-Djava.security.auth.login.config=/
> kafka_server_jaas.conf
> -Djavax.net.debug=all -Dsun.security.krb5.debug=true -Djavax.net.debug=all
> -Dsun.security.krb5.debug=true -Djava.security.krb5.conf= conf>/krb5.conf"
>
> I enabled extra debugging too. The JAAS file has the following info:
>
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/+kafka.keytab"
> principal="kafka/@MY_DOMAIN";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> useTicketCache=true
> storeKey=true
> keyTab="/etc/+kafka.keytab"
> principal="kafka/@MY_DOMAIN";
> };
>
> Note that I enabled useTicketCache in the client section.
>
> I know that my krb5.conf file is good since the brokers are healthy and
> consumer/producers are able to do their work.
>
> Two scenarios:
>
> 1. When I enabled the useTicketCache=true, I get the following error:
>
> *Aug 08, 2016 8:42:46 PM org.apache.zookeeper.ClientCnxn$SendThread
> startConnectWARNING: SASL configuration failed:
> javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper
> server allows it.*
>
> I execute "kinit kafka/@ -k -t
> /etc/+kafka.keytab " on the same shell where I run the .sh CLI
> tool.
> 2. When I remove userTicketCache, I get the following error:
>
>
>
>
>
>
>
>
> *Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ZooKeeper closeINFO: Session:
> 0x356621f18f70009 closedError while executing ACL command:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /kafka-acl/TopicAug 08, 2016 9:03:38 PM
> org.apache.zookeeper.ClientCnxn$EventThread runINFO: EventThread shut
> downorg.I0Itec.zkclient.exception.ZkException:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /kafka-acl/Topicat
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)*
>
>
> Here is the command I run to set the ACLs in all cases:
> ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=:
> 2181
> --add --allow-principal User:Bob --producer --topic ssl-topic
>
>
> I use Kafka 0.9.0.1. Note that I am using the same keytabs that my Brokers
> (Kafka services) are using.
>
>
> Any ideas what I am doing wrong or what I should do differently to get ACLs
> set?
>
> Thanks,
> Derar
>


Reg: DefaultPartitioner in Kafka

2016-08-29 Thread BigData dev
Hi All,
In the DefaultPartitioner implementation, when the key is null, the partition
number is computed modulo the number of available partitions. Below is the
code snippet.

if (availablePartitions.size() > 0) {
    int part = Utils.toPositive(nextValue) % availablePartitions.size();
    return availablePartitions.get(part).partition();
}
Whereas when the key is not null, the partition number is computed modulo the
total number of partitions:

return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

So if some partitions are unavailable, the producer will not be able to
publish messages to those partitions.

Shouldn't we do the same here and consider only the available partitions?

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67

Could anyone help clarify this?


Thanks,
Bharat


Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin

2016-10-31 Thread BigData dev
Congratulations Becket!!



Thanks,
Bharat

On Mon, Oct 31, 2016 at 11:26 AM, Jun Rao  wrote:

> Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
>
> Jun
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy  wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Becket has made significant contributions to Kafka over the last two
> years.
> > He has been deeply involved in a broad range of KIP discussions and has
> > contributed several major features to the project. He recently completed
> > the implementation of a series of improvements (KIP-31, KIP-32, KIP-33)
> to
> > Kafka’s message format that address a number of long-standing issues such
> > as avoiding server-side re-compression, better accuracy for time-based
> log
> > retention, log roll and time-based indexing of messages.
> >
> > Congratulations Becket! Thank you for your many contributions. We are
> > excited to have you on board as a committer and look forward to your
> > continued participation!
> >
> > Joel
> >
>


Reg: Kafka ACLs

2017-01-25 Thread BigData dev
Hi,
I have a question: can we use Kafka ACLs with only the SASL/PLAIN mechanism?
After I enabled it, I am still able to produce to and consume from topics.

One more observation: in kafka_jaas.conf there is no Client section, so I get
the WARN below, since we don't have this kind of mechanism with ZooKeeper.
I just want to confirm: is this expected?

WARN SASL configuration failed: javax.security.auth.login.LoginException:
No JAAS configuration section named 'Client' was found in specified JAAS
configuration file: '/usr/iop/current/kafka-broker/conf/kafka_jaas.conf'.
Will continue connection to Zookeeper server without SASL authentication,
if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice-secret";
};

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_alice="alice-secret";
};


I see that SASL/PLAIN with SSL is the recommendation; can we use the
SASL/PLAIN mechanism alone with ACLs?
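
For context, a sketch of the broker-side settings involved (property names as
in Kafka 0.10; whether one of the commented lines explains the open
produce/consume behavior on this particular cluster is an assumption):

# server.properties (sketch)
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
# ACL commands only take effect at runtime if an authorizer is configured:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# If true, resources with no matching ACL are open to everyone:
allow.everyone.if.no.acl.found=false
super.users=User:admin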

Thanks


Reg: Kafka Kerberos

2017-02-07 Thread BigData dev
Hi,
I am using Kafka 0.10.1.0 on a Kerberized cluster.

Kafka_jaas.conf file:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/h...@example.com";
};

If I change the keytab to a user keytab (e.g., kafkatest), the topic will be
created (creating the topic with the Kafka console command), but it has no
metadata information and no leader assigned to it, because the kafka service
user does not have access: when I check the ZooKeeper nodes, the topic node
has the permissions below.

getAcl /brokers/topics/user-topic-test1
'world,'anyone
: r
'sasl,'kafkatest
: cdrwa


So if I run setAcl /brokers/topics/user-topic-test1
world:anyone:r,sasl:kafkatest:cdrwa,sasl:kafka:cdrwa and then restart Kafka,
the topic gets a leader assigned to it.

So, to make this work, is it mandatory for the Client section to use the
kafka service keytab, or to add the principal from the specified keyTab as a
super user?


Could anyone please provide information on this?


Thanks


Reg: Kafka HDFS Connector with (HDFS SSL enabled)

2017-02-15 Thread BigData dev
Hi,

Does the Kafka HDFS Connector work with SSL-enabled HDFS? The only
security-related properties I see are hdfs.authentication.kerberos,
connect.hdfs.keytab, and hdfs.namenode.principal, and these are all related
to HDFS Kerberos.

From the configuration and code, I see that we pass only Kerberos parameters
and no SSL configuration, so I want to confirm: does the Kafka HDFS Connector
work with SSL-enabled HDFS?

Could you please provide any information on this?


Thanks


Re: Reg: Kafka HDFS Connector with (HDFS SSL enabled)

2017-03-17 Thread BigData dev
Hi Colin,
I configured SSL in HDFS and used SWebHDFS, and I was able to make it work
with the Kafka HDFS Connector.
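
For reference, a sketch of the connector setting this implies (an assumption
based on the resolution above; connector name, host, port, and topic are
placeholders):

# HDFS sink connector config (sketch)
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=test_topic
# swebhdfs:// routes writes through WebHDFS over SSL
hdfs.url=swebhdfs://namenode.example.com:50470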


Thanks,
Bharat


On Fri, Feb 17, 2017 at 1:47 PM, Colin McCabe  wrote:

> Hi,
>
> Just to be clear, HDFS doesn't use HTTP or HTTPS as its primary
> transport mechanism.  Instead, HDFS uses the Hadoop RPC transport
> mechanism.  So in general, it should not be necessary to configure SSL
> to connect a client to HDFS.
>
> HDFS does "support SSL" in the sense that the HDFS web UI can be
> configured to use it.  You can also use HttpFS or WebHDFS to access
> HDFS, which might motivate you to configure SSL.  Are you trying to
> configure one of these?
>
> best,
> Colin
>
>
> On Wed, Feb 15, 2017, at 11:03, BigData dev wrote:
> > Hi,
> >
> > Does Kafka HDFS Connect work with HDFS (SSL). As I see only properties in
> > security is
> > hdfs.authentication.kerberos, connect.hdfs.keytab,hdfs.
> namenode.principal
> > as these properties are all related to HDFS Kerberos.
> >
> > As from the configuration and code I see we pass only Kerberos
> > parameters,
> > not seen SSL configuration, so want to confirm will the Kafka HDFS
> > Connector works with HDFS (SSL enabled)?
> >
> > Could you please provide any information on this.
> >
> >
> > Thanks
>


Re: any way to manually assign listeners to Kafka partitions?

2017-03-25 Thread BigData dev
There is a method on the consumer for getting all partitions of a topic:

  List<PartitionInfo> partitionsFor(String topic)

and an assign method that can be used to assign the required partitions to
that consumer:

  void assign(Collection<TopicPartition> partitions)

(Javadoc:
https://kafka.apache.org/0100/javadoc/org/apache/kafka/common/PartitionInfo.html
https://kafka.apache.org/0100/javadoc/org/apache/kafka/common/TopicPartition.html)

So, in each thread you can use these methods and do the needful.
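
A minimal Java sketch of the pattern (the bootstrap server, group id, class
name, and topic are placeholders; one such consumer would run per thread):

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "manual-example"); // unused for rebalancing with assign()
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Discover every partition of the topic...
        List<TopicPartition> partitions = new ArrayList<>();
        for (PartitionInfo pi : consumer.partitionsFor("test_topic")) {
            partitions.add(new TopicPartition(pi.topic(), pi.partition()));
        }

        // ...and assign them manually (no consumer-group rebalancing occurs).
        consumer.assign(partitions);

        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
        }
        consumer.close();
    }
}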

Thanks,
Bharat

On Sat, Mar 25, 2017 at 10:35 AM, Laxmi Narayan 
wrote:

> Hi,
> Is there any way to get the list of all partitions and assign them to
> individual Java threads for constant listening?
>
> Keep learning keep moving .
>


Need Permissions for creating KIP in Kafka

2017-05-05 Thread BigData dev
Hi,
Could you please provide permissions for creating KIPs in Kafka?

username: bharatv
email: bigdatadev...@gmail.com


[DISCUSS] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-08 Thread BigData dev
Hi All,
I want to start a discussion on this simple KIP for the Kafka Streams
application reset tool (kafka-streams-application-reset.sh):
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150

Thank you, Matthias J. Sax, for providing the JIRA and the info to work on
this.


Thanks,
Bharat


Re: producer and consumer sample code

2017-05-08 Thread BigData dev
Hi Adaryl,

There are sample Java producers and consumers in the Apache Kafka source code
repo:

https://github.com/apache/kafka/tree/trunk/examples/src/main/java/kafka/examples


There is also an IBM Hadoop Dev blog on getting started with Kafka; the code
sample is attached to the blog.
https://developer.ibm.com/hadoop/2016/06/23/getting-started-with-kafka-2/
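
And for a quick start, a minimal producer sketch (the broker address, class
name, and topic are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            // Fire-and-forget send of simple string key/value pairs.
            producer.send(new ProducerRecord<>("test_topic",
                Integer.toString(i), "message-" + i));
        }
        producer.close(); // flushes any buffered records
    }
}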


Thanks,
Bharat


On Mon, May 8, 2017 at 5:34 PM, Adaryl Wakefield <
adaryl.wakefi...@hotmail.com> wrote:

> Does anybody have a really good example of some producers and consumers
> that they have written that they would be willing to share?
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.massstreet.net
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>


[VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-09 Thread BigData dev
Hi, Everyone,

Since this is a relatively simple change, I would like to start the voting
process for KIP-156: Add option "dry run" to Streams application reset tool

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150


The vote will run for a minimum of 72 hours.


Thanks,

Bharat


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-09 Thread BigData dev
Eno,
I got confirmation on the JIRA that all tools and their parameters are public
API, so I have started the vote for this KIP.

Thanks,
Bharat


On Tue, May 9, 2017 at 1:09 PM, Eno Thereska  wrote:

> +1 for me. I’m not sure we even need a KIP for this but it’s better to be
> safe I guess.
>
> Eno
>
> > On May 9, 2017, at 8:41 PM, BigData dev  wrote:
> >
> > Hi, Everyone,
> >
> > Since this is a relatively simple change, I would like to start the
> voting
> > process for KIP-156: Add option "dry run" to Streams application reset
> tool
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=69410150
> >
> >
> > The vote will run for a minimum of 72 hours.
> >
> >
> > Thanks,
> >
> > Bharat
>
>


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread BigData dev
Thanks everyone for voting.

KIP-156 has passed with +4 binding (Neha, Jay Kreps, Sriram Subramanian and
Gwen Shapira) and +3 non-binding (Eno Thereska, Matthias J. Sax and Bill
Bejeck)

Thanks,

Bharat Viswanadham


On Wed, May 10, 2017 at 9:46 AM, Sriram Subramanian 
wrote:

> +1
>
> On Wed, May 10, 2017 at 9:45 AM, Neha Narkhede  wrote:
>
> > +1
> >
> > On Wed, May 10, 2017 at 12:32 PM Gwen Shapira  wrote:
> >
> > > +1. Also not sure that adding a parameter to a CLI requires a KIP. It
> > seems
> > > excessive.
> > >
> > >
> > > On Tue, May 9, 2017 at 7:57 PM Jay Kreps  wrote:
> > >
> > > > +1
> > > > On Tue, May 9, 2017 at 3:41 PM BigData dev 
> > > > wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Since this is a relatively simple change, I would like to start the
> > > > voting
> > > > > process for KIP-156: Add option "dry run" to Streams application
> > reset
> > > > tool
> > > > >
> > > > >
> > > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=69410150
> > > > >
> > > > >
> > > > > The vote will run for a minimum of 72 hours.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Bharat
> > > > >
> > > >
> > >
> > --
> > Thanks,
> > Neha
> >
>


Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-15 Thread BigData dev
Hi All,
Given the simple and non-controversial nature of the KIP, I would like to
start the voting process for KIP-157: Add consumer config options to
streams reset tool

https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-+Add+consumer+config+options+to+streams+reset+tool


The vote will run for a minimum of 72 hours.

Thanks,

Bharat


Re: `key.converter` per connector

2017-05-16 Thread BigData dev
Hi,
key.converter and value.converter can be overridden at the connector level.
This has been supported since Kafka 0.10.1.0.

For more info, refer to this KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-75+-+Add+per-connector+Converters
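
A sketch of what the per-connector override looks like in a distributed-mode
connector config (the connector name, class, topic, and registry URL are
placeholders):

{
  "name": "my-avro-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "topics": "test_topic",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081"
  }
}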


Thanks,
Bharat


On Tue, May 16, 2017 at 1:39 AM, Nicolas Fouché  wrote:

> Hi,
>
> In distributed mode, can I set `key.converter` in a connector
> configuration? Some of my connectors use
> `org.apache.kafka.connect.storage.StringConverter` while others would need
> `io.confluent.connect.avro.AvroConverter`.
>
> So I wondered if `key.converter` could be overriden at the connector level.
>
> Thanks.
>