Are your topics dynamically created? If so, see this
thread: https://www.mail-archive.com/dev@kafka.apache.org/msg67224.html
-Jaikiran
On 29/05/18 5:21 PM, Shantanu Deshmukh wrote:
Hello,
We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10
partitions. I have an application
It's quite possible that the bootstrap server being used in your test
case is different (since you pull it out of some "details") from the one
being used in the standalone Java program. I don't mean the IP address
(since the logs do indicate it is localhost), but I think it might be
the port. P
There's kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh
scripts that are shipped as part of the Kafka binary. You can setup your
setup with SSL and then try and run it against them to get some numbers
of your own.
-Jaikiran
On 30/04/18 4:58 PM, M. Manna wrote:
Hello,
We wanted
fka client
MUST have certificate, and truststore setup and can read only if ACLs are
programmed for that topic.
Any idea if such a thing exists ?
Thanks.
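For reference, topic-level read ACLs of that kind are typically added with the kafka-acls.sh tool shipped with the broker. A sketch (the principal must match your client certificate's DN, and the topic/group names here are placeholders; flags vary by version):

```shell
# Sketch: allow a TLS-authenticated client (principal = certificate DN)
# to consume from one topic. Names below are placeholders.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal "User:CN=client1" \
  --consumer --topic my-topic --group my-group
```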
On Tue, Dec 19, 2017 at 10:10 PM, Jaikiran Pai
wrote:
What exact issue are you running into with that config?
-Jaikiran
On 20/12/17 7:24 AM, Darshan wrote:
Anyone ?
On Mon, Dec 18, 2017 at 7:25 AM, Darshan
wrote:
Hi
I am wondering if there is a way to run the SSL and PLAINTEXT mode
together? I am running Kafka 0.10.2.1. We want our internal
Can you show us some snippet of code where you are consuming this data?
Which language consumer are you using and how many consumers are part of
the (same) group? Which exact version of Kafka broker and which version
of the client side libraries?
-Jaikiran
On 14/11/17 6:01 PM, chandarasekara
Radu Radutiu wrote:
If you test with Java 9 please make sure to use an accelerated cipher suite
(e.g. one that uses AES GCM such as TLS_RSA_WITH_AES_128_GCM_SHA256).
Radu
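If you need to pin such a suite explicitly, the Kafka SSL configs of that era accept it on both broker and client sides. A sketch (values illustrative):

```properties
# client or broker SSL config (sketch)
security.protocol=SSL
ssl.cipher.suites=TLS_RSA_WITH_AES_128_GCM_SHA256
```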
On Mon, Oct 30, 2017 at 1:49 PM, Jaikiran Pai
wrote:
I haven't yet had a chance to try out Java 9, but that's def
ork email so please don't be confused.)
>
> The topic is already existing.
>
> -----Original Message-----
> From: Jaikiran Pai [mailto:jai.forums2...@gmail.com ]
> Sent: Sunday, November 5, 2017 10:56 PM
> To: users@kafka.apache.org
> Subject: [EXTERNAL]Re: 0.9.0.0 Log4j
Is the topic to which the message is being produced, already present or
is it auto created?
-Jaikiran
On 05/11/17 3:43 PM, Dale wrote:
I am using the 0.9.0.0 log4j appender for Kafka because I have a lot of apps
dependent on log4j 1.2.x that cannot be upgraded to use newer versions of
log4j
Congratulations Kafka team on the release. Happy to see Kafka reach this
milestone. It has been a pleasure using Kafka and also interacting with
the Kafka team.
-Jaikiran
On 01/11/17 7:57 PM, Guozhang Wang wrote:
The Apache Kafka community is pleased to announce the release for Apache
Kafka
on, Oct 30, 2017 at 8:08 AM, Jaikiran Pai wrote:
We have been using Kafka in some of our projects for the past couple of
years. Our experience with Kafka and SSL had shown some performance
issues when we had seriously tested it (which admittedly was around a
year back). Our basic tests did show that things had improved over time
with newer ve
I'm curious how these emails even get delivered to this (and sometimes
the dev list) if the user isn't yet subscribed (through the
users-subscribe mailing list)? Is this mailing list setup to accept
mails from unsubscribed users?
-Jaikiran
On 11/10/17 12:32 PM, Jakub Scholz wrote:
Out of cu
Can you post the exact log messages that you are seeing?
-Jaikiran
On 07/09/17 7:55 AM, Raghav wrote:
Hi
My Java code prints the Kafka config every time it does a send, which makes
the log very verbose.
How can I reduce the Kafka client (producer) logging in my Java code?
Thanks for your help.
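One way to quiet this (a sketch, assuming log4j 1.x, which the Kafka clients of that era commonly logged through) is to raise the level for the Kafka client loggers in log4j.properties:

```properties
# Raise Kafka client logging from INFO to WARN so the producer config
# dump is no longer printed on every send path.
log4j.logger.org.apache.kafka=WARN
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=WARN
```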
One thing that you might want to check is the number of consumers that
are connected/consuming against this Kafka setup. We have consistently
noticed that the CPU usage of the broker is very high even with very few
consumers (around 10 Java consumers). There's even a JIRA for it. From
what I re
https://issues.apache.org/jira/browse/KAFKA-4631 and the discussion in the
PR https://github.com/apache/kafka/pull/2622 for details.
Regards,
Rajini
On Thu, Mar 2, 2017 at 4:35 AM, Jaikiran Pai
wrote:
For future reference - I asked this question on dev mailing list and based
on the discussion there was ab
, Jaikiran Pai wrote:
We are on Kafka 0.10.0.1 (server and client) and use Java
consumer/producer APIs. We have an application where we create Kafka
topics dynamically (using the AdminUtils Java API) and then start
producing/consuming on those topics. The issue we frequently run into is
this:
1. Application proces
the loop to execute.
Last time we tried it, it was running for that file for over 2 hours and still
not finished.
Regards,
Varun
-----Original Message-----
From: Jaikiran Pai [mailto:jai.forums2...@gmail.com]
Sent: 22 November 2016 02:20
To: users@kafka.apache.org
Subject: Re: Kafka producer d
The KafkaProducer.send returns a Future. What happens
when you add a future.get() on the returned Future, in that while loop,
for each sent record?
-Jaikiran
On Tuesday 22 November 2016 12:45 PM, Phadnis, Varun wrote:
Hello,
We have the following piece of code where we read lines from a file
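The per-record blocking pattern suggested above can be sketched with Python's stdlib futures as a stand-in for KafkaProducer.send() returning a Future (names here are illustrative, not the Kafka API):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an async send that returns a Future immediately,
# the way KafkaProducer.send(record) does in the Java client.
def fake_send(record):
    return len(record)  # pretend this is the broker ack / RecordMetadata

with ThreadPoolExecutor(max_workers=4) as pool:
    records = ["alpha", "beta", "gamma"]
    acks = []
    for r in records:
        future = pool.submit(fake_send, r)
        # Blocking on each future (future.get() in Java) surfaces send
        # errors immediately instead of letting them pass silently.
        acks.append(future.result())

print(acks)  # [5, 4, 5]
```

Blocking per record trades throughput for immediate error visibility, which is usually the point of the diagnostic step suggested above.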
Which exact version of Kafka installation and Kafka client is this? And
which language/library of Kafka client? Also, are you describing this
situation in the context of producing messages? Can you post your
relevant code from the application where you deal with this?
Connection management is
Tuesday 01 November 2016 07:39 PM, Jaikiran Pai wrote:
We are using Kafka 0.10.1.0 (server) and Java client API (the new API)
for consumers. One of the issues we have been running into is that the
consumer is considered "dead" by the co-ordinator because of the lack
of activity within a spe
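For context, the consumer-side knobs involved in that "dead consumer" detection look like this on a 0.10.1.0 Java consumer (a sketch; values shown are the defaults of that era):

```properties
# Heartbeats are sent from a background thread; the consumer is marked
# dead if none arrive within session.timeout.ms.
session.timeout.ms=10000
heartbeat.interval.ms=3000
# 0.10.1.0 added a separate bound on time between poll() calls (KIP-62).
max.poll.interval.ms=300000
```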
IMO, it's a bug and it shouldn't be throwing NPEs. If this is
reproducible then maybe you can file a JIRA so that someone from the dev
team can take a look.
-Jaikiran
On Friday 21 October 2016 10:37 AM, Максим Гумеров wrote:
Hi! I see WARNs on kafka startup even if I only have a single empty
In addition to what Michael noted, this question has been asked a few
times before too and here's one such previous discussion
https://www.quora.com/What-is-the-actual-role-of-ZooKeeper-in-Kafka
-Jaikiran
On Wednesday 14 September 2016 03:50 AM, Michael Noll wrote:
Eric,
the latest versions
This is a known issue and is being tracked in this JIRA
https://issues.apache.org/jira/browse/KAFKA-3539
-Jaikiran
On Saturday 10 September 2016 12:20 AM, Peter Sinoros Szabo wrote:
Hi,
I'd like to use the Java Kafka producer in a non-blocking async mode.
My assumptions were that until the new
What does the output of:
lsof -p <kafka-pid>
show on that specific node?
-Jaikiran
On Monday 12 September 2016 10:03 PM, Michael Sparr wrote:
5-node Kafka cluster, bare metal, Ubuntu 14.04.x LTS with 64GB RAM, 8-core,
960GB SSD boxes and a single node in cluster is filling logs with the following:
[20
esProp, 15 * 1024 * 1024: java.lang.Integer)
val config = LogConfig(logProps)
val cp = new File("/Users/gaurav/Downloads/corrupt/gaurav/kafka-logs/Topic3-12")
var log = new Log(cp, config, 0, time.scheduler, time
On Tue, Aug 30, 2016 at 11:37 AM, Jaikiran Pai
wrote:
Can you paste the entire exception stacktrace please?
-Jaikiran
On Tuesday 30 August 2016 11:23 AM, Gaurav Agarwal wrote:
Hi there, just wanted to bump up the thread one more time to check if
someone can point us in the right direction... This one was quite a serious
failure that took down many
Can you explain what exactly you mean by "cloud" and what kind of
restrictions you are running into in trying to point to the truststore
location?
-Jaikiran
On Friday 19 August 2016 08:09 PM, Nomar Morado wrote:
kafka consumer/producer currently require path to keystore/truststore.
my client
What's the heartbeat interval that you have set on these consumer
configs (if any)? Can you also paste a snippet of your code to show what
the consumer code looks like (including the poll and commit calls)?
-Jaikiran
On Tuesday 23 August 2016 07:55 PM, Franco Giacosa wrote:
Hi I am experie
Which Java vendor and version are you using in runtime? Also what OS is
this? Can you get the lsof output (on Linux) and paste the output of
that to some place (like gist) to show us what descriptors are open etc...
-Jaikiran
On Friday 26 August 2016 02:49 AM, Bharath Srinivasan wrote:
Hello:
Is anyone producing any (new) messages to the topics you are subscribing
to in that consumer?
-Jaikiran
On Friday 26 August 2016 10:14 AM, Jack Yang wrote:
Hi all,
I am using kafka 0.10.0.1, and I set up my listeners like:
listeners=PLAINTEXT://myhostName:9092
then I have one consumer going u
On Friday 12 August 2016 08:45 PM, Oleg Zhurakousky wrote:
It hangs indefinitely in any container.
I don't think that's accurate. We have been running Kafka brokers and
consumers/producers in docker containers for a while now and they are
functional. Of course, you need to make sure you use t
The quickstart step 2 has a couple of commands, which exact command
shows this exception and is there more in the exception, like an
exception stacktrace? Can you post that somehow?
-Jaikiran
On Monday 08 August 2016 12:46 PM, Sven Ott wrote:
Hello everyone,
I downloaded the latest Kafka ve
+1 for Java 8. Our eco-system which uses Kafka and many other open
source projects are now fully on Java 8 since a year or more.
-Jaikiran
On Friday 17 June 2016 02:15 AM, Ismael Juma wrote:
Hi all,
I would like to start a discussion on making Java 8 a minimum requirement
for Kafka's next feat
Adding the Kafka dev list to cc, hoping they would answer this question.
-Jaikiran
On Friday 10 June 2016 11:18 AM, Jaikiran Pai wrote:
We are using 0.9.0.1 of Kafka server and (Java) clients. Our (Java)
consumers are assigned to dynamic runtime generated groups i.e. the
consumer group name is generated dynamically at runtime, using some
application specific logic. I have been looking at the docs but haven't
yet found anything
On Thursday 09 June 2016 08:00 PM, Patrick Kaufmann wrote:
Hello
Recently we’ve run into a problem when starting our application for the first
time.
At the moment all our topics are auto-created. Now, at the first start there
are no topics, so naturally some consumers try to connect to topic
How do you check/verify the duplication of the message? Can you post
relevant part of your producer code too?
-Jaikiran
On Thursday 09 June 2016 10:36 PM, Clark Breyman wrote:
We're seeing a situation in one of our clusters where a message will
occasionally be duplicated on an incorrect topic.
You can take a thread dump (using "jstack <pid>") when
the program doesn't terminate and post that output here. That will tell
us which threads are causing the program to not terminate.
-Jaikiran
On Tuesday 17 May 2016 11:32 PM, Andy Davidson wrote:
I wrote a little test client that reads from a f
That's actually not the right way to delete topics (or for that matter
managing a Kafka instance). It can lead to odd/corrupt installation.
-Jaikiran
On Wednesday 11 May 2016 06:27 PM, Eduardo Costa Alfaia wrote:
Hi,
It’s better creating a script that delete the kafka folder where exist the
k
On Tuesday 10 May 2016 09:29 PM, Radoslaw Gruchalski wrote:
Kafka is expecting the state to be there when the zookeeper comes back. One way
to protect yourself from what you see happening, is to have a zookeeper quorum.
Run a cluster of 3 zookeepers, then repeat your exercise.
Kafka will conti
Going by the name of that property (max.partition.fetch.bytes), I'm
guessing it's the max fetch bytes per partition of a topic. Are you sure
the data you are receiving in that consumers doesn't belong to multiple
partitions and hence can/might exceed the value that's set per
partition? By the w
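The per-partition multiplication suggested above can be sketched numerically (all numbers hypothetical):

```python
# max.partition.fetch.bytes caps each partition's share of a fetch, so a
# consumer assigned several partitions can receive more than that value
# in one poll. Numbers below are made up for illustration.
max_partition_fetch_bytes = 1 * 1024 * 1024   # 1 MiB per partition
assigned_partitions = 6
worst_case_fetch = max_partition_fetch_bytes * assigned_partitions
print(worst_case_fetch // (1024 * 1024))  # 6  (MiB across the whole poll)
```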
From what you pasted, I can't say for certain whether you are using
those properties as consumer level settings or broker level settings.
The group.min.session.timeout.ms and the group.max.session.timeout.ms
are broker level settings (as far as I understand) and should be part of
your broker co
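Those two, by contrast, go into the broker's server.properties; they bound the session.timeout.ms a consumer may request (a sketch, values illustrative):

```properties
# Broker-side bounds on the session timeout consumers may request.
group.min.session.timeout.ms=6000
group.max.session.timeout.ms=300000
```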
Have you tried getting the memory usage output using tool like jmap and
seeing what's consuming the memory? Also, what are you heap sizes for
the process?
-Jaikiran
On Tuesday 19 April 2016 02:31 AM, McKoy, Nick wrote:
To follow up with my last email, I have been looking into
socket.receive.
We have had this issue in 0.8.x and at that time we did not investigate
it. Recently we upgraded to 0.9.0.1 and had similar issue which we
investigated and narrowed down to what's explained here
http://mail-archives.apache.org/mod_mbox/kafka-users/201604.mbox/%3C571F23ED.7050405%40gmail.com%3E.
here is the CPU usage and how fast you want to detect
the consumer failure. Faster failure detection makes the partitions
assigned to dead consumers to assign to other consumers.
Best,
Liquan
On Tue, Apr 26, 2016 at 1:16 AM, Jaikiran Pai
wrote:
We have been investigating an unreasonably high CPU usage of the Kafka
process when there's no _real_ activity going on between the consumers
and the broker. We had this issue in 0.8.x days and is exactly the same
as what's being tracked in this JIRA
https://issues.apache.org/jira/browse/KAFKA-
, Fangmin Lv, Flavio Junqueira, Flutra Osmani, Gabriel
Nicolas Avellaneda, Geoff Anderson, Grant Henke, Guozhang Wang, Gwen
Shapira, Honghai Chen, Ismael Juma, Ivan Lyutov, Ivan Simoneko,
Jaikiran Pai, James Oliver, Jarek Jarcec Cecho, Jason Gustafson, Jay
Kreps, Jean-Francois Im, Jeff Holoman,
It's been discussed here recently
http://mail-archives.apache.org/mod_mbox/kafka-dev/201509.mbox/%3CCAFc58G_dn_mMGaJoyiw81-RdAFJ2NAgxQFLtc%3D9pU5PwPW_Kvg%40mail.gmail.com%3E
-Jaikiran
On Monday 28 September 2015 11:08 PM, Richard Lee wrote:
It appears from maven central and git that there was a
Sending this to the dev list since the Kafka dev team might have more
inputs on this one. Can someone please take a look at the issue noted
below and whether the suggested change makes sense?
-Jaikiran
On Tuesday 15 September 2015 12:03 AM, Jaikiran Pai wrote:
We have been using Kafka for a while now in one of dev projects.
Currently we have just 1 broker and 1 zookeeper instance. Almost every
day, Kafka "stalls" and we end up cleaning up the data/log folder of
Kafka and zookeeper and bring it up afresh. We haven't been able to
narrow down the issue
On Wednesday 12 August 2015 04:59 AM, venkatesh kavuluri wrote:
83799 [c3-onboard_-2-9571-1439334326956-cfa8b46a-leader-finder-thread]
INFO kafka.consumer.SimpleConsumer - Reconnect due to socket error:
java.net.SocketTimeoutException
163931 [c3-onboard_-2-9571-1439334326956-cfa8b46a-l
Such "errors" are very typical in zookeeper logs - it's very noisy. I
typically ignore those errors and try and debug the Kafka issue either
via Kafka logs, Kafka thread dumps and/or zookeeper shell.
Anyway, how are you adding the topics (script, code?) and what exactly
are you noticing? Runni
I am on Kafka 0.8.2.1 (Java 8) and have happened to run into this same
issue where the KafkaServer (broker) goes into a indefinite while loop
writing out this message:
[2015-08-04 15:45:12,350] INFO conflict in /brokers/ids/0 data:
{"jmx_port":-1,"timestamp":"1438661432074","host":"foo-bar",
Would it be possible to enhance the kafka-topics.sh script so that it
can show, against the topic it's listing, whether a particular topic is
marked for deletion? Right now, to figure out whether a topic has been
marked for deletion, one has to use the zookeeper-shell script and list
the topics
On Friday 17 July 2015 10:14 AM, Jiangjie Qin wrote:
I think the rough calculation of max memory footprint for each high level
consumer would be:
(Number Of Partitions For All Topics) * fetch.message.max.bytes *
queued.max.message.chunks + (some decompression memory cost for a message)
Is this
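Plugging hypothetical numbers into the formula quoted above (all values made up for illustration):

```python
# (Number Of Partitions For All Topics) * fetch.message.max.bytes
#   * queued.max.message.chunks, per the formula above.
partitions_all_topics = 50
fetch_message_max_bytes = 1 * 1024 * 1024   # 1 MiB
queued_max_message_chunks = 2

max_bytes = partitions_all_topics * fetch_message_max_bytes * queued_max_message_chunks
print(max_bytes // (1024 * 1024))  # 100  (MiB, before decompression overhead)
```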
I can't see anything obvious wrong in those configs or the code (after
just a brief look). Are you sure the system on which you are running
Kafka has its date/time correctly set?
-Jaikiran
On Monday 29 June 2015 12:06 PM, Krzysztof Zarzycki wrote:
Greetings!
I have problem with Kafka. I had a
You probably have the wrong version of the Kafka jar(s) within your
classpath. Which version of Kafka are you using and how have you setup
the classpath?
-Jaikiran
On Thursday 18 June 2015 08:11 AM, Srividhya Anantharamakrishnan wrote:
Hi,
I am trying to set up Kafka in our cluster and I am r
One way to narrow down the issue is to attach a debugger to the Kafka
JVM and add a breakpoint in SimpleConsumer to see the real exception
stacktrace which is causing the reconnect. I've filed a JIRA with a
patch to improve this logging to include the entire cause stacktrace
while logging this
Hi Sanjay,
Did you check that no other Kafka process is using the /tmp/kafka-logs
folder? What command(s) did you use to verify that?
-Jaikiran
On Saturday 23 May 2015 12:19 PM, Sanjay Mistry wrote:
[2015-05-23 12:16:41,624] INFO Initiating client connection,
connectString=localhost:2181 sessi
One thing to remember is that the .index files are memory-mapped [1]
which in Java means that the file descriptors may not be released even
when the program is done using it. A garbage collection is expected to
close such resources, but forcing a System.gc() is only a hint and thus
doesn't guar
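The behaviour can be sketched with a memory-mapped file in any language; here in Python, with the explicit close standing in for what a GC might (or might not) eventually do:

```python
import mmap
import os
import tempfile

# Sketch: an OS-level mapping (like Kafka's .index files) holds its
# resources until explicitly unmapped; a GC run (System.gc() in Java)
# is only a hint and may never release it promptly.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 4096)
mm = mmap.mmap(fd, 4096)
mm[0:5] = b"index"
data = bytes(mm[0:5])
mm.close()          # deterministic release, instead of waiting for GC
os.close(fd)
os.remove(path)
print(data)  # b'index'
```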
Ricardo,
What does your consumer code look like? If it's too big to have that
code inline in this mail, you can perhaps put it in a github repo or
even a gist.
-Jaikiran
On Monday 09 February 2015 12:54 AM, Ricardo Ferreira wrote:
Hi Gwen,
Sorry, both the consumer and the broker are 0.8.2?
>> java.io.IOException: Unable to create
/tmp/PerfTopic22_1/ProducerRequestSize.csv
It looks like a file with that exact same name already exists which is
causing that file creation request to fail. This indicates that probably
the metric name (ProducerRequestSize) from which the file is creat
On Monday 02 February 2015 11:03 PM, Jun Rao wrote:
Jaikiran,
The fix you provided in probably unnecessary. The channel that we use in
SimpleConsumer (BlockingChannel) is configured to be blocking. So even
though the read from the socket is in a loop, each read blocks if there is
no bytes receiv
Hi Mathias,
Looking at that thread dump, I think the potential culprit is this one:
TRACE 303545: (thread=200049)
sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:Unknown line)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EP
I use a simple email client (Thunderbird) and have a filter setup so
that mails to the Kafka user mailing list are moved to a specific
folder. I then have "thread view" enabled so that the replies/discussion
shows up in the right context. I have the same for some other mailing
lists too and hav
onfiguration for the servers? In particular it would be
good to know the retention and/or log compaction settings as those delete
files.
-Jay
On Sun, Jan 25, 2015 at 4:34 AM, Jaikiran Pai
wrote:
Hi Yonghui,
Do you still have this happening? If yes, can you tell us a bit more
about your setup
On my local machine I have reduced the hard limit to 6000 and used 1000 threads to send messages to
kafka (topic had 100 partitions with 1 replication factor)
On Fri, Jan 30, 2015 at 2:14 PM, Jaikiran Pai
wrote:
Looking at that heap dump, this probably is a database connection/resource
leak (298 c
wrote:
I have shared object histogram after and before gc on gist
https://gist.github.com/ankit1987/f4a04a1350fdd609096d
On Fri, Jan 30, 2015 at 12:43 PM, Jaikiran Pai
wrote:
What kind of a (managed) component is that which has the @PreDestroy?
Looking at the previous snippet you added, it
ation.
On Fri, Jan 30, 2015 at 9:34 AM, Jaikiran Pai
wrote:
Which operating system are you on and what Java version? Depending on
the OS, you could get tools (like lsof) to show which file descriptors
are being held on to. Is it the client JVM which ends up with these leaks?
Also, would it be possible to post a snippet of your application code
which
Hi Yonghui,
Do you still have this happening? If yes, can you tell us a bit more
about your setup? Is there something else that accesses or maybe
deleting these log files? For more context to this question, please read
the discussion related to this here
http://mail-archives.apache.org/mod_mb
Just had a quick look at this and it turns out the object name you are
passing is incorrect. I had to change it to:
./kafka-run-class.sh kafka.tools.JmxTool --object-name
'kafka.server:name=UnderReplicatedPartitions,type=ReplicaManager'
--jmx-url service:jmx:rmi:///jndi/rmi://localhost:/jmxr
Hi Su,
How exactly did you start the Kafka server on instance "A"? Are you sure
the services on it are bound to non localhost IP? What does the
following command result from instance B:
telnet public.ip.of.A 9092
-Jaikiran
On Tuesday 20 January 2015 07:16 AM, Su She wrote:
Hello Everyone,
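For the record, whether instance A is reachable from B usually comes down to what the broker binds and advertises. A sketch using the 0.8.x-era property names (newer brokers use listeners/advertised.listeners instead; the address below is a placeholder):

```properties
# server.properties on instance A (0.8.x-era names).
# Leave host.name unset to bind all interfaces; advertise the public address.
advertised.host.name=public.ip.of.A
advertised.port=9092
```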
Hi Scott,
A quick look at the JmxTool code suggests that it probably isn't able to
find the attribute for that MBean, although that MBean does seem to have
1 attribute named Value (I used jconsole to check that). The output you
are seeing is merely the date (without any format) being printed o
I just downloaded the Kafka binary and am trying this on my 32 bit JVM
(Java 7). Trying to start Zookeeper or Kafka server keeps failing with
"Unrecognized VM option 'UseCompressedOops'":
./zookeeper-server-start.sh ../config/zookeeper.properties
Unrecognized VM option 'UseCompressedOops'
Error
Created a JIRA for this: https://issues.apache.org/jira/browse/KAFKA-1853
-Jaikiran
On Thursday 08 January 2015 01:18 PM, Jaikiran Pai wrote:
Apart from the fact that the file rename is failing (the API notes that
there are chances of the rename failing), it looks like the
implementation in FileMessageSet's rename can cause a couple of issues,
one of them being a leak.
The implementation looks like this
https://github.com/apache/ka
our system guys named it. How can I
measure the connections open to 10.100.98.102?
Thanks
AL
On Jan 7, 2015 9:42 PM, "Jaikiran Pai" wrote:
Hi Sa,
Are you really sure "w2" is a real hostname, something that is
resolvable from the system where you are running this. The JSON output
you posted seems very close to the example from the jmxtrans project
page https://code.google.com/p/jmxtrans/wiki/GraphiteWriter, so I
suspect you aren'
On Thursday 08 January 2015 01:51 AM, Sa Li wrote:
see this type of error again, back to normal in few secs
[2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/
10.100.98.102
That's a really weird hostname, the "harmful-jar.master". Is that really
your hostname? You mention th