Hi,
In the Kafka source code, kafka.utils.TestUtils is defined in
src/core/src/test/scala/unit/kafka/utils/TestUtils.scala, but the
kafka_2.8.0-0.8.0-beta1.jar doesn't include kafka.utils.TestUtils. I'm
wondering how to write Java unit tests using kafka.utils.TestUtils. Should I use
some other jar?
Hi Jun,
Thank you for your reply. I'm still a little fuzzy on the concept.
Are you saying I can have topics A, B, and C with
log.retention.bytes.per.topic.A = 15MB
log.retention.bytes.per.topic.B = 20MB
log.retention.bytes = 30MB
And thus topic C will get the value 30MB, since it's not defined?
You raised a good point. The same is true for all the tools that are
under src/core/src/test/scala/other/kafka.
I think we need to fix the packaging under src/core/test. Probably move the
tools under core/src/main/scala/kafka/tools so that they go in the main
Kafka jar. And either include the test
These are really tests, not something we should necessarily ship. They live
in the kafka-test jar, which I think is right. You can build them by
running
./sbt test:package
I would prefer adding this to the main package target rather than moving
our test code into the main source directory.
-Jay
Hi – Has anyone figured out a clean way to ignore/exclude the "simple" slf4j
bindings that get included in the kafka-assembly jar for 0.8? I would like all
of the libraries in my app to log through log4j but for those libraries using
slf4j, the "simple" bindings in the kafka-assembly jar are gett
I have a cluster of 3 Kafka servers. The replication factor is 3. Two out of 3
servers were shut down and traffic was sent to only the one server that was up.
I brought the second host up and it says, according to the logs, that the
server has started.
I ran ./kafka-list-topic.sh --zookeeper ... It was still showing leaders
a
What is the preferred method for controlled shutdown: using the admin tool or
setting the flag "controlled.shutdown.enable" to true? What is the advantage of
using one versus the other?
Thanks,
Vadim
On Sun, Aug 18, 2013 at 11:05 PM, Vadim Keylis wrote:
> thanks so much. Greatly appreciated.
>
>
> On Sun, A
I think the error message can be improved to at least print which
partitions it couldn't move the leader for. What could be happening is that
the 2 brokers that were down might not have entered the ISR yet. So the
tool will not be able to move any leaders to them. You can run
kafka-list-topics with
It does print partitions. I just did not include them in the bug.
How can I monitor replica resync progress, as well as know when the resync
process has completed, using a script? That should allow me to better predict
when the tool will run successfully.
Thanks so much.
On Mon, Aug 19, 2013 at 12:59 PM,
Thanks Jay and Neha. I built kafka_2.8.0-0.8.0-beta1-test.jar, which contains
the TestUtils class. Is there any sample Java code for writing Kafka test cases?
Regards,
Jiang
-----Original Message-----
From: Jay Kreps [mailto:jay.kr...@gmail.com]
Sent: Monday, August 19, 2013 12:33 PM
To: users@k
You can monitor the under-replicated partition count through the
"kafka.server.UnderReplicatedPartitions" jmx bean on every leader. Another
way, which is heavyweight, is to run kafka-list-topics, but I would
recommend running that only for diagnostic purposes, not for monitoring.
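If it helps, a rough Scala sketch of polling that count over JMX is below. The JMX port and the exact MBean ObjectName are assumptions for illustration (check the name in jconsole for your broker version); only the standard javax.management calls are a given here.

import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

object UnderReplicatedCheck {
  def main(args: Array[String]) {
    // assumes the broker JVM was started with JMX enabled, e.g. JMX_PORT=9999
    val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi")
    val connector = JMXConnectorFactory.connect(url)
    try {
      val mbsc = connector.getMBeanServerConnection
      // the ObjectName below is an assumption; verify the exact name in jconsole
      val bean = new ObjectName("\"kafka.server\":type=\"ReplicaManager\",name=\"UnderReplicatedPartitions\"")
      val count = mbsc.getAttribute(bean, "Value")
      println("Under-replicated partitions: " + count)
    } finally {
      connector.close()
    }
  }
}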
Thanks,
Neha
On
You can find simple producer/consumer demo code under the examples/ sub
project. For unit test cases, you can look under
core/src/test/scala/unit/kafka.
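As a starting point, a minimal producer in the spirit of the examples/ code might look roughly like this (a sketch only, assuming a broker on localhost:9092 and the 0.8 Scala producer API):

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object ProducerDemo {
  def main(args: Array[String]) {
    val props = new Properties()
    // assumes a local broker listening on port 9092
    props.put("metadata.broker.list", "localhost:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new KeyedMessage[String, String]("test-topic", "hello from the demo"))
    producer.close()
  }
}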
Thanks,
Neha
On Mon, Aug 19, 2013 at 1:53 PM, Wu, Jiang2 wrote:
> Thanks Jay and Neha. I built kafka_2.8.0-0.8.0-beta1-test.jar, which
> cont
Thanks Neha, it's really helpful.
Jiang
-----Original Message-----
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Monday, August 19, 2013 5:44 PM
To: users@kafka.apache.org
Subject: Re: kafka_2.8.0-0.8.0-beta1.jar doesn't include kafka.utils.TestUtils
You can find simple producer/con
Jun, I read that FAQ entry you linked, but I am not seeing any Zookeeper
connection loss in the logs. It's rebalancing multiple times per minute,
though. Any idea what else could cause this? We're running Kafka 0.7.2 with
approximately 400 consumers against a topic with 400 partitions * 3 brokers.
--
For the first question, yes, topic C will get the value of 30MB.
For the second question, log.retention.bytes only controls the segment log
file size, not the index. Typically, index file size is much smaller than
the log file. The index file of the last (active) segment is presized to
the max index size
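To restate the first part as broker config, a sketch with the values from the question (the per-topic property form simply mirrors the question above; double-check the exact per-topic syntax for your version):

# 30MB default, picked up by topic C
log.retention.bytes=31457280
# 15MB override for topic A
log.retention.bytes.per.topic.A=15728640
# 20MB override for topic B
log.retention.bytes.per.topic.B=20971520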
In our binary release, you can just remove slf4j-simple-1.6.4.jar and
add a slf4j-log4j12
jar.
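If you are pulling Kafka in through sbt instead of the binary release, the equivalent idea (a sketch, assuming slf4j-simple arrives as a transitive dependency) is to exclude it and add the log4j binding yourself:

// exclude the "simple" binding and add slf4j-log4j12 instead
libraryDependencies += "org.apache.kafka" % "kafka_2.9.2" % "0.8.0-beta1" exclude("org.slf4j", "slf4j-simple")

libraryDependencies += "org.slf4j" % "slf4j-log4j12" % "1.6.4"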
Thanks,
Jun
On Mon, Aug 19, 2013 at 10:13 AM, Paul Mackles wrote:
> Hi – Has anyone figured out a clean way to ignore/exclude the "simple"
> slf4j bindings that get included in the kafka-assembly ja
We also have a jmx bean that tracks the lag in messages per partition in
the follower broker.
Thanks,
Jun
On Mon, Aug 19, 2013 at 1:07 PM, Vadim Keylis wrote:
> It does print partitions. I just did not include them in the bug.
>
> How can I monitor replica resync progress as well as know when
Any failure/restart of a consumer or a broker can also trigger a rebalance.
Thanks,
Jun
On Mon, Aug 19, 2013 at 6:00 PM, Ian Friedman wrote:
> Jun, I read that FAQ entry you linked, but I am not seeing any Zookeeper
> connection loss in the logs. It's rebalancing multiple times per minute,
>
It depends on how much flexibility you need during the controlled shutdown
and whether you have remote jmx operations enabled in your production Kafka
cluster. The jmx controlled shutdown method will offer more flexibility, as
your script will have the retry logic and you don't need to make config
changes
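If you go the config-based route, the broker-side sketch is just the flag already named in this thread (check your broker version's docs for any related retry settings):

# server.properties
controlled.shutdown.enable=true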
Paul,
I'm trying to understand the 2nd problem you reported. Are you saying that
you set log.retention.bytes=11534336 (11MB) but your log nevertheless grew to
114MB, which means the config option didn't really work as expected?
Thanks,
Neha
On Mon, Aug 19, 2013 at 8:46 PM, Jun Rao wrote:
Neha. Thanks so much for explaining. That leaves only one open question.
How do you validate that the shutdown was successful if you do not have remote
jmx access, besides setting the timeout reasonably high?
Thanks so much again,
Vadim
On Mon, Aug 19, 2013 at 9:11 PM, Neha Narkhede wrote:
> It
Not that I am some expert on this subject, but I can see that broker logs
indicate the shutdown progress:
https://github.com/tejasapatil/kafka/blob/0.8.0-beta1-candidate1/core/src/main/scala/kafka/server/KafkaServer.scala#L165
On Mon, Aug 19, 2013 at 10:19 PM, Vadim Keylis wrote:
> Neha. Thanks
Tejas, I saw that too, but was hoping to avoid the old grandpa approach :) :).
That will work as well.
On Mon, Aug 19, 2013 at 10:41 PM, Tejas Patil wrote:
> Not that I am some expert on this subject, but I can see that broker logs
> indicate the shutdown progress:
>
> https://github.com/tejasapatil/ka
Hi,
This is probably a very obvious question, but I cannot find the answer to
this.
What does the correlation id mean in a producer request?
Tim
That's not it either. I just had all my consumers shut down on me with this:
INFO 21:51:13,948 () ZkUtils$ - conflict in
/consumers/flurry1/owners/dataLogPaths/1-183 data:
flurry1_hs1030-1376964634130-dcc9192a-0 stored data:
flurry1_hs1061-1376964609207-4b7f348b-0
INFO 21:51:13,948 () Zooke
Sorry, ignore that first exception, I believe that was caused by an actual
manual shutdown. The NoNode exception, though, has been popping up a lot, and I
am not sure if it's relevant, but it seems to show up a bunch when the
consumers decide it's time to rebalance continuously.
--
Ian Friedman
Multiple produce requests are sent asynchronously over the same socket.
Suppose you send 2 requests and get back a single response; how do you figure
out which of those 2 requests it corresponds to? The correlation id helps
here.
AFAIK, the correlation id is added to produce requests and the broker uses the
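To make the pairing concrete, here is a purely conceptual Scala sketch (not Kafka's actual client code): the client attaches a unique correlation id to each request it sends and uses it to match up the response that comes back.

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

class PendingRequests[Req] {
  private val nextId = new AtomicInteger(0)
  private val inFlight = new ConcurrentHashMap[Int, Req]()

  // returns the correlation id to send in the request header
  def register(request: Req): Int = {
    val id = nextId.incrementAndGet()
    inFlight.put(id, request)
    id
  }

  // called when a response arrives carrying the same correlation id
  def complete(correlationId: Int): Option[Req] =
    Option(inFlight.remove(correlationId))
}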
Hi,
I am having problems using Kafka as a dependency in sbt.
With this simple build.sbt:
name := "kafka-dependency-test"
scalaVersion := "2.9.2"
libraryDependencies += "org.apache.kafka" % "kafka_2.9.2" % "0.8.0-beta1"
When I do
sbt update
I get the following error:
sbt.ResolveExcept
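The error above is cut off, so this is only a guess, but a commonly reported cause with the 0.8 artifacts is log4j's optional jms/jmx transitive dependencies, which are not in Maven Central. A hedged build.sbt workaround:

// excludes the unresolvable transitive dependencies pulled in via log4j (assumption about the cause)
libraryDependencies += "org.apache.kafka" % "kafka_2.9.2" % "0.8.0-beta1" exclude("javax.jms", "jms") exclude("com.sun.jdmk", "jmxtools") exclude("com.sun.jmx", "jmxri")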