Please unsubscribe me from this list.
Yes, we're using it in production, and no, there are no performance issues or
bottlenecks related to Kafka. All consumption/production has continued normally.
Before this we were on OpenJDK 7.
On Wed, Oct 29, 2014 at 2:41 PM, Seshadri, Balaji
wrote:
> Are you using it in production?
I've been using Clojure on OpenJDK 8 for my producers and consumers for about
a month now without any issues. Anything specific you're interested in?
Cheers,
Michael Nussbaum
On Wed, Oct 29, 2014 at 2:32 PM, Seshadri, Balaji
wrote:
> Has anybody used Kafka with Java 8?
Has anybody used Kafka with Java 8?
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Wednesday, October 29, 2014 11:11 AM
To: 'd...@kafka.apache.org'; 'users@kafka.apache.org'
Subject: Apache Kafka Consumers in java 8
Hi All,
Can you please share your experiences running Kafka consumers/producers with
Java 8?
Thanks,
Balaji
Hi Guys,
Can you guys share any experiences you have had with a live upgrade?
How reliable is it? Did you lose messages?
What issues did you face when doing the live upgrade?
We are planning to upgrade from 0.8-beta to 0.8.2 before we move our webMethods
broker-based messaging layer to Kafka.
> On Tue, Sep 30, 2014 at 4:57 PM, Seshadri, Balaji <
> balaji.sesha...@dish.com> wrote:
>
>> The ZooKeeper session timeout is 60 secs, but that did not help.
>>
>> We are having broker crashes.
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
On Tue, Sep 30, 2014 at 11:49 AM, Seshadri, Balaji wrote:
> Hi Joe,
At DISH we are having issues with the 0.8-beta version used in PROD; it's crashing
every 2 days and becoming a blocker for us.
It would be great if we could get 0.8.2 or 0.8.1.2, whichever is faster to release,
as we can't wait for 3 weeks; our new Order Management system is going to sit on
top of Kafka.
On Tue, Sep 30, 2014 at 11:10 AM, Seshadri, Balaji wrote:
> I would love to help you guys
Sent: Monday, September 29, 2014 5:21 PM
To: Seshadri, Balaji
Cc: users@kafka.apache.org
Subject: Re: BadVersion state in Kafka Logs
It is difficult to predict an exact date, though all the discussions of the
progress and ETA are on the mailing list. You can follow the discussions to
know the details and/or
Neha,
Do you know the date in October when 0.8.2 is going to be out?
Thanks,
Balaji
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Thursday, September 25, 2014 1:08 PM
To: Seshadri, Balaji
Cc: users@kafka.apache.org
Subject: Re: BadVersion state in Kafka Logs
We are close to the
Hi Neha,
Do you know when you guys are releasing 0.8.2?
Thanks,
Balaji
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Thursday, September 25, 2014 9:41 AM
To: users@kafka.apache.org
Subject: RE: BadVersion state in Kafka Logs
Thanks for the reply.
From the logs you've attached, my guess is it's most likely due to KAFKA-1382.
Thanks,
Neha
On Wed, Sep 24, 2014 at 10:48 AM, Seshadri, Balaji wrote:
> Hi,
>
>
>
> We got the below error in our logs and our consumers stopped consuming any
> data. It worked only after a restart.
Please find the log attached.
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Wednesday, September 24, 2014 11:48 AM
To: 'users@kafka.apache.org'
Subject: BadVersion state in Kafka Logs
Hi,
We got the below error in our logs and our consumers stopped consuming any data.
It worked only after a restart.
We would like to confirm that it's because we are running with the 0.8-beta version
and not the 0.8 release version, to convince "THE MGMT" guys.
Please let me know if it's this KAFKA-138
We are planning to upgrade from 0.8-beta to 0.8.1.1.
Can you please let us know the impacts of doing so?
I understand there are fixes for deadlocks in the consumer, so maybe we should
upgrade the consumer.
But if we run with the default offset storage, we can still run the old producers,
right?
Thanks,
Balaji
APIs, even if it means multiple
consumers without threads?
Gwen
On Wed, Sep 3, 2014 at 3:06 PM, Seshadri, Balaji
wrote:
> We can still do it with a single ConsumerConnector with multiple threads.
>
> Each thread updates its own data in ZooKeeper. The below is our own
> implementation of commitOffset.
We can still do it with a single ConsumerConnector with multiple threads.
Each thread updates its own data in ZooKeeper. The below is our own
implementation of commitOffset.
public void commitOffset(DESMetadata metaData) {
    log.debug("Update offsets only for -> " + metaData.toString());
    // ... the rest of the method is cut off in the archive; see the sketch below
}
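For readers without the original attachment, here is a minimal, hypothetical
reconstruction of such a per-partition commit, using the plain ZooKeeper client and
the 0.8 high-level consumer's /consumers/<group>/offsets/<topic>/<partition> layout.
DESMetadata and the surrounding class are the poster's own code; everything below
(class name, field names) is illustrative only.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class PartitionOffsetCommitter {

    private final ZooKeeper zk;
    private final String group;

    public PartitionOffsetCommitter(ZooKeeper zk, String group) {
        this.zk = zk;
        this.group = group;
    }

    // Writes the offset of the next message to consume for one topic partition.
    // Parent znodes are assumed to already exist (the high-level consumer creates them).
    public void commitOffset(String topic, int partition, long nextOffset)
            throws KeeperException, InterruptedException {
        String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
        byte[] data = Long.toString(nextOffset).getBytes(StandardCharsets.UTF_8);
        if (zk.exists(path, false) == null) {
            zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            zk.setData(path, data, -1); // -1 = ignore the znode version
        }
    }
}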
jars. I would prepare and vote on it
if others would too.
know if I can run
producers/consumers against a vanilla 0.8.1.1 broker cluster with it...
-Jonathan
On Aug 22, 2014, at 11:02 AM, Seshadri, Balaji wrote:
> Hi Team,
>
> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> compilation errors.
>
> Please let me know which patch I should apply.
Hi Team,
We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me compilation
errors.
Please let me know which patch I should apply from the JIRA below. I tried the
latest one and it failed to apply.
https://issues.apache.org/jira/browse/KAFKA-1419
Thanks,
Balaji
We will work on the upgrade.
Thanks,
Balaji
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Monday, August 11, 2014 10:30 PM
To: Seshadri, Balaji
Cc: users@kafka.apache.org
Subject: Re: Kafka Consumer not consuming in webMethods.
0.8-beta is really old. Could you try using 0.8.1.1?
Thanks,
Jun
Any pointers would be helpful.
From: Seshadri, Balaji
Sent: Monday, August 11, 2014 12:19 PM
To: 'jun@gmail.com'; 'neha.narkh...@gmail.com'; 'users@kafka.apache.org'
Subject: RE: Kafka Consumer not consuming in webMethods.
The offset checker does show a lot of lag.
rain-raw-consumers1 rain-raw-listner 47 630138482 32181 rain-raw-consumers1_dm1mad06.echostar.com-1407776221959-74777cd2-3
From: Seshadri, Balaji
Sent: Monday, August 11, 2014 12:11 PM
To:
cDirs.consumerOffsetDir()+"/"+metaData.getPartitionNumber(),nextOffset+"");
checkPointedOffset.put(key,nextOffset);
}
}
Can you please review this?
-Original Message-
From: Seshadri, Balaji
Sent: Wednesday, April 23, 2014 12
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
Thanks,
Jun
On Wed, Apr 23, 2014 at 9:01 AM, Seshadri, Balaji
wrote:
> I'm not seeing that API in the Java MessageAndMeta; is this part of
> ConsumerIterator?
>
>
> -Original Message-
> From: Jun Rao [mailto:jun...@gmail.com]
Sent: Wednesday, April 23, 2014 10:14 AM
To: users@kafka.apache.org
Subject: Re: commitOffsets by partition 0.8-beta
Take a look at the example in
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
Thanks,
Jun
On Wed, Apr 23, 2014 at 9:01 AM, Seshadri, Balaji
It should be the offset of the next message to be
consumed. So, you should save mAndM.nextOffset().
Thanks,
Jun
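To make the nextOffset() point concrete, here is a compact sketch along the lines
of the 0.8 SimpleConsumer example linked above; the broker host, topic, and
partition are placeholders, and the leader lookup and error handling from the wiki
example are omitted.

import java.nio.ByteBuffer;
import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class NextOffsetExample {
    public static void main(String[] args) {
        String topic = "my-topic";   // placeholder
        int partition = 0;           // placeholder
        String clientName = "nextOffsetDemo";
        // The full wiki example discovers the partition leader first; hard-coded here.
        SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, clientName);
        long readOffset = 0L;        // normally the last offset you saved
        FetchRequest req = new FetchRequestBuilder()
                .clientId(clientName)
                .addFetch(topic, partition, readOffset, 100000)
                .build();
        FetchResponse response = consumer.fetch(req);
        for (MessageAndOffset mAndM : response.messageSet(topic, partition)) {
            ByteBuffer payload = mAndM.message().payload();
            byte[] bytes = new byte[payload.limit()];
            payload.get(bytes);
            // ... process bytes ...
            // Save nextOffset(), not offset(): it is the first offset you have NOT consumed.
            readOffset = mAndM.nextOffset();
        }
        consumer.close();
    }
}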
On Tue, Apr 22, 2014 at 8:57 PM, Seshadri, Balaji
wrote:
> Yes I disabled it.
>
> My doubt is whether the path should have the offset to be consumed or the last
> consumed offset.
>
>
Do you have auto commit disabled?
Thanks,
Jun
On Tue, Apr 22, 2014 at 7:10 PM, Seshadri, Balaji
wrote:
> I'm updating the latest consumed offset in the ZooKeeper directory.
>
> Say, for example, if my last consumed message has an offset of 5, I update it in
> the path, but when I check the ZooKeeper path
"+metaData.getPartitionNumber(),metaData.getOffSet()+"");
}
Thanks,
Balaji
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Tuesday, April 22, 2014 8:10 PM
To: 'users@kafka.apache.org'
Subject: RE: commitOffsets by partition 0.8-beta
I'm
From: Seshadri, Balaji
Sent: Friday, April 18, 2014 11:50 AM
To: 'users@kafka.apache.org'
Subject: RE: commitOffsets by partition 0.8-beta
Thanks Jun.
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Friday, April 18, 2014 11:37 AM
To: users@kafka.apache.org
On Fri, Apr 18, 2014 at 1:15 PM, Seshadri, Balaji
wrote:
> The controller not failing over is the issue, which I feel we got resolved.
>
> The other fix is ZK node not getting deleted when preferred replica
> election is triggered.
>
> https://issues.apache.org/jira/browse/KAFKA-1365
>
>
Sent: Friday, April 18, 2014 2:04 PM
To: users@kafka.apache.org
Subject: Re: KAFKA-717
Hi Balaji,
What issues do you have doing the upgrade?
On Fri, Apr 18, 2014 at 10:25 AM, Seshadri, Balaji wrote:
> Hi Jun,
>
> We could not move to 0.8.1 because of issues we had with the upgrade.
>
> We are still on 0.8-beta.
On Fri, Apr 18, 2014 at 10:02 AM, Seshadri, Balaji wrote:
> Hi,
>
> We have a use case at DISH where we need to stop the consumer when we
> have issues proceeding further to the database or another back end.
>
> We update offset manually for each consumed message. There are 4
>
and we are not patching it any more. You probably can try
0.8.0 or wait until 0.8.1.1 is out.
Thanks,
Jun
On Fri, Apr 18, 2014 at 8:26 AM, Seshadri, Balaji
wrote:
> I'm trying to apply the patch from KAFKA-717 for 0.8.0-BETA candidate
> and it fails.
>
> Error:
>
>
Hi,
We have a use case at DISH where we need to stop the consumer when we have issues
proceeding further to the database or another back end.
We update the offset manually for each consumed message. There are 4 threads (e.g.)
consuming from the same connector, and when one thread commits the offset there is
I'm trying to apply the patch from KAFKA-717 for 0.8.0-BETA candidate and it
fails.
Error:
Patch failed: project/Build.scala
project/Build.scala: patch does not apply.
Please let me know if you guys know how to do it.
Thanks,
Balaji
Subject: Controller is not being failed over 0.8.1
What's the controller value in the zk path (see
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper)?
Any error in the controller/state-change log?
Thanks,
Jun
On Wed, Apr 16, 2014 at 10:07 AM, Seshadri, Balaji wrote:
Hi,
We got the following error spamming the logs when broker 1 is the controller
and we shut it down in a controlled manner (not kill -9).
The leader is switched to broker 2 for all partitions, but the controller is not
failed over to broker 2.
[2014-04-16 10:48:47.976-0600] ERROR [Con
https://issues.apache.org/jira/browse/KAFKA-1365
:)
-Original Message-
From: Bello, Bob [mailto:bob.be...@dish.com]
Sent: Tuesday, April 15, 2014 10:00 AM
To: users@kafka.apache.org
Cc: Bello, Bob
Subject: RE: Kafka upgrade 0.8.0 to 0.8.1 - kafka-preferred-replica-election
failure
I pe
To: users@kafka.apache.org
Subject: Re: Issue with Upgrade of 0.8.1
You may be hitting https://issues.apache.org/jira/browse/KAFKA-1382
Could you check if you have any long GCs on the server side and session
timeouts from the Zookeeper log?
Guozhang
On Fri, Apr 11, 2014 at 3:10 PM, Seshadri, Balaji
wrote:
Hi,
Currently we are using the 0.8.1 version of Kafka.
Does the ConsumerConnector.commitOffsets method commit all partitions for
messages in the iterator, or only the offset of the message that is currently
consumed?
Please clarify, as we are manually committing offsets and this would create a
bigger issue.
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, April 11, 2014 4:00 PM
To: 'users@kafka.apache.org'
Subject: RE: Issue with Upgrade of 0.8.1
Thread Dump attached.
From: Seshadri, Balaji
Sent: Friday, April 11, 2014 3:36 PM
To: 'users@kafka.apache.org'
Subject: Issue with Upgrade of 0.8.1
Hi,
We upgraded to the 0.8.1 version of Kafka in TEST. We did a load test shutting
down 1 broker in the cluster, and we are getting the below error and the cluster
becomes unresponsive.
Please find thread dump attached.
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, April 11, 2014 3:36 PM
To: 'users@kafka.apache.org'
Subject: Issue with Upgrade of 0.8.1
Hi,
We upgraded to 0.8.1 version of Kafka in TEST,we did
Hi,
We upgraded to the 0.8.1 version of Kafka in TEST. We did a load test shutting
down 1 broker in the cluster, and we are getting the below error and the cluster
becomes unresponsive.
Do you guys have any fix for this issue?
[2014-04-11 15:10:42.595-0600] ERROR Conditional update of path
/brokers/topics/rain
Are you committing offsets manually after you consume, as you mentioned earlier
that "auto.commit.offset" is false?
-Original Message-
From: Arjun Kota [mailto:ar...@socialtwist.com]
Sent: Friday, April 11, 2014 10:56 AM
To: users@kafka.apache.org
Subject: Re: consumer not consuming messages
Sent: Tuesday, April 08, 2014 1:00 PM
To: Seshadri, Balaji; users@kafka.apache.org
Subject: RE: Single thread, Multiple partitions
Ah, thanks, figured it out now.
What kind of bottlenecks should I expect to run into if I'm looking at 10s of
1000s of partitions for a topic? The amount of
Sent: Tuesday, April 08, 2014 1:30 PM
To: Seshadri, Balaji; users@kafka.apache.org
Subject: RE: Single thread, Multiple partitions
Ah, thanks. The intent of my question though was to better understand how a
large number of partitions affects Kafka itself.
- Original Message -
From: balaji.sesha
I think you are looking to access messages from a set of partitions by your
own policy. You should use simple consumers in 0.8 and maintain the offsets you
have read.
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
If it is 0.9, I'm yet to come up to speed.
Thanks
The ConsumerOffsetChecker has a main method and prints the information on the
console. I want to capture the data and push it to a monitoring server in a
cleaner way than capturing the ConsumerOffsetChecker console output.
Regards,
Harsh
On Fri, Mar 28, 2014 at 1:41 PM, Seshadri, Balaji
wrote:
>
This is a Scala API; it should be possible to call it from Groovy.
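As an alternative, the committed offsets of the old ZooKeeper-based consumer can be
read directly from ZooKeeper; a rough sketch is below (the connect string and group
name are placeholders, and computing lag would additionally require the log end
offset, which ConsumerOffsetChecker fetches from the brokers).

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.ZooKeeper;

public class ConsumerOffsetReader {
    public static void main(String[] args) throws Exception {
        String group = "my-group";                               // placeholder
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });
        String offsetsRoot = "/consumers/" + group + "/offsets";
        for (String topic : zk.getChildren(offsetsRoot, false)) {
            for (String partition : zk.getChildren(offsetsRoot + "/" + topic, false)) {
                byte[] data = zk.getData(offsetsRoot + "/" + topic + "/" + partition, false, null);
                long committed = Long.parseLong(new String(data, StandardCharsets.UTF_8));
                System.out.printf("%s %s-%s committed=%d%n", group, topic, partition, committed);
            }
        }
        zk.close();
    }
}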
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, March 28, 2014 1:57 PM
To: 'users@kafka.apache.org'
Subject: RE: Java API to monitor Consumer Offset and Lag
kafka.tools.ConsumerOffsetChecker
-Original Message-
From: Harshvardhan Chauhan [mailto:ha...@gumgum.com]
Sent: Friday, March 28, 2014 12:54 PM
To: users@kafka.apache.org
Subject: Java API to monitor Consumer Offset and Lag
Hi,
I am trying to write a groovy script to get consumer offset
only includes Server but this bug
actually applies to Consumers.
Which version are you using?
Guozhang
On Sun, Jan 12, 2014 at 8:00 AM, Seshadri, Balaji
wrote:
> Please see log.
>
> seshbal:tm1mwdpl03-/apps/tc/tm1-deshdpconsumer101/apache-tomcat-7.0.42
> /logs
> >ls -1 catalina
umerConnector$ZKSessionExpireListener@376807ed]
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Sunday, January 12, 2014 8:51 AM
To: 'users@kafka.apache.org'
Subject: RE: Looks like consumer fetchers get stopped we are not getting any
data
I do see ZkClient expired when this really happened.
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Saturday, January 11, 2014 10:31 PM
To: users@kafka.apache.org
Subject: RE: Looks like consumer fetchers get stopped we are not getting any
data
From the logs it seems the consumer's ZK registry has been lost, while
KAFKA-693 is mainly due to a server-side issue. Could you check if there is a
session timeout from the consumer in the ZK log?
Guozhang
On Sat, Jan 11, 2014 at 2:33 PM, Seshadri, Balaji
wrote:
> We found the b
On Jan 10, 2014, at 10:11 PM, Jun Rao wrote:
>
> Have you looked at our FAQ, especially
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
> ?
>
> Thanks,
>
> Jun
>
>
> On Fri, Jan 10, 2014 at 2:25 PM, Seshadri, Ba
Any clue would be helpful.
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, January 10, 2014 12:46 PM
To: users@kafka.apache.org
Subject: RE: Looks like consumer fetchers get stopped we are not getting any
data
Yes rebalance begins and exceptions
> thread, which tells the "executor_watcher" thread to shutdown the
>> fetchers, that would be another reason the consumers stop processing data.
>> Is this possible?
>>
>> Thank you,
>> rob
>>
>> -Original Message-
>> From: Seshad
It would be helpful if you guys could shed some light on why all fetchers are
getting stopped.
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, January 10, 2014 11:28 AM
To: users@kafka.apache.org
Subject: RE: Looks like consumer fetchers get stopped
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, January 10, 2014 10:52 AM
To: users@kafka.apache.org
Subject: Looks like consumer fetchers get stopped we are not getting any data
Please let us know why we are not getting any data from Kafka after this log
from Kafka.
Please let us know why we are not getting any data from Kafka after this log
from Kafka; can you guys let us know?
What could be causing all the associated fetchers to be stopped, and why is it
not retrying?
{2014-01-10 00:58:09,284} WARN
[account-info-updated-hadoop-consumer_tm1mwdpl04-1389222553
We also have just account-access-consumer TDA.
From: Withers, Robert
Sent: Saturday, January 04, 2014 2:56 PM
To: Seshadri, Balaji; Nanjegowda, Mithunraj; ShankenpurMayanna, Diwakar; Gulia,
Vikram
Subject: RE: DES Restart TEST
Here's the stripped logs with just account-access threads (3 of
consuming.
Could you share the logs of both consumers - account-access-hadoop-consumer_tm1mwdpl04
and account-access-hadoop-consumer_tm1mwdpl03-1383065261413-15d3cb41-0
Also, take thread dumps on both consumer processes and share that.
Thanks,
Neha
On Sat, Jan 4, 2014 at 9:56 AM, Seshadri, Balaji
The consumer offset checker shows it is connected to the consumer
(account-access-hadoop-consumer_tm1mwdpl03-1383065261413-15d3cb41-0),
but that consumer is not started; what could be the reason it shows as connected?
Another question is why it is not getting rebalanced to the other
consumer (account-access-ha
Any update on this, guys?
-Original Message-
From: Seshadri, Balaji
Sent: Saturday, December 14, 2013 4:22 PM
To: users@kafka.apache.org
Subject: RE: Unable to start consumers in Tomcat
We are doing one Scala consumer and one Java consumer that listen on the same topic
with different groups.
On Fri, Dec 13, 2013 at 6:29 PM, Seshadri, Balaji
wrote:
> Can't we create message streams
On Fri, Dec 13, 2013 at 6:17 PM, Seshadri, Balaji
wrote:
> We needed HTTP interface to
On Fri, Dec 13, 2013 at 5:42 PM, Seshadri, Balaji
0.8
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Friday, December 13, 2013 3:33 PM
To: users@kafka.apache.org
Subject: Re: Unable to start consumers in Tomcat
Which version of kafka are you using?
On Fri, Dec 13, 2013 at 2:29 PM, Seshadri, Balaji
Any idea on this error, guys?
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, December 13, 2013 9:32 AM
To: 'users@kafka.apache.org'
Subject: Unable to start consumers in Tomcat
Hi,
Can you guys let us know why we are getting this
Hi,
Can you guys let us know why we are getting this error when we try to spawn a
consumer?
ZookeeperConsumerConnector can create message streams at most once
Balaji
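That error means createMessageStreams was called more than once on the same
connector. A hedged sketch of the usual pattern - one connector, one
createMessageStreams call, and the resulting streams handed to worker threads - is
below; the connection settings and topic name are placeholders.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class StreamsOnceExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // placeholder
        props.put("group.id", "my-group");          // placeholder
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for 3 streams in a SINGLE createMessageStreams call; calling it a
        // second time on the same connector raises the "at most once" error above.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 3));

        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (final KafkaStream<byte[], byte[]> stream : streams.get("my-topic")) {
            pool.submit(new Runnable() {
                public void run() {
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    while (it.hasNext()) {                 // blocks until a message arrives
                        byte[] message = it.next().message();
                        // ... handle the message ...
                    }
                }
            });
        }
    }
}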
We also suspect this is because of the KAFKA-914 bug, which was not part of our
client build.
So we are rebuilding with the latest Kafka source.
-Original Message-
From: Seshadri, Balaji
Sent: Wednesday, August 14, 2013 10:53 PM
To: users@kafka.apache.org
Subject: RE: Blocking on Consumer
We see the WAITING state in all our thread dumps.
We did see data in Kafka, but the consumer is not able to consume.
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Wednesday, August 14, 2013 8:29 PM
To: users@kafka.apache.org
Subject: RE: Blocking on Consumer
re details
and paste your threaddump?
Thanks,
Joel
On Wed, Aug 14, 2013 at 1:50 PM, Seshadri, Balaji
wrote:
> We are getting hung on the consumer side when we try to consume data
> using the Scala API.
>
>
> We found it when we did a load test. Attaching the thread dump; please let us
We are getting hung on the consumer side when we try to consume data using the
Scala API.
We found it when we did a load test. Attaching the thread dump; please let us know
if there is a fix.
Thanks,
Balaji
Can you guys help me with this? I'm getting the below error on the high-level
consumer, but the topic does exist.
{2013-07-31 11:52:57,176} WARN
[UNITTEST_MERD7-181710-1375293172238-5f1b0917-leader-finder-thread]
(Logging.scala:88) -
[UNITTEST_MERD7-181710-1375293172238-5f1b0917-leader-finder-thread], Failed t
Try ./sbt "++2.8.0 package".
-Original Message-
From: Rob Withers [mailto:reefed...@gmail.com]
Sent: Thursday, June 06, 2013 11:54 AM
To: kafka list
Subject: package error
I am quite unfamiliar with compiling with sbt package. What could be my issue
here? It seems like the scala libra
Hi Neha,
Is moving to ZooKeeper 3.4.x a big change?
Can you please explain which parts it affects (the consumer API, for example)?
Thanks,
Balaji
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Friday, May 17, 2013 7:35 AM
To: users@kafka.apache.org
Subject: RE:
The host string is null or empty. Can you paste the
code you used to instantiate the consumer?
Thanks,
Neha
On May 16, 2013 12:45 PM, "Seshadri, Balaji"
wrote:
> Hi,
>
> I'm trying to write a webMethods consumer for Kafka and I get the following
> error
Hi,
I'm trying to write a webMethods consumer for Kafka and I get the following error
when I try to integrate.
java.lang.NullPointerException
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:360)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:331)
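The trace points at ZooKeeper's ClientCnxn constructor, which fails like this when
the connect string it receives is null or empty. A quick, hypothetical check is to
open a raw ZooKeeper session with the same string you pass as zookeeper.connect
(host name below is a placeholder):

import org.apache.zookeeper.ZooKeeper;

public class ZkConnectCheck {
    public static void main(String[] args) throws Exception {
        // Use exactly the value you hand to the consumer as zookeeper.connect;
        // if it is null/empty, ClientCnxn fails just like the trace above.
        String zkConnect = "zk1.example.com:2181";  // placeholder
        ZooKeeper zk = new ZooKeeper(zkConnect, 30000,
                event -> System.out.println("ZK event: " + event.getState()));
        System.out.println("Session id: 0x" + Long.toHexString(zk.getSessionId()));
        zk.close();
    }
}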