Hi,
I'm testing Kafka 0.8.0 failover.
I have 5 brokers: 1, 2, 3, 4, 5. I shut down broker 5 (with controlled
shutdown enabled).
broker 4 is my bootstrap broker.
My config has: default.replication.factor=2, num.partitions=8.
When I look at the Kafka server.log on broker 4 I get the error below,
which only
Hi,
I've finally fixed this by closing the connection on timeout and creating a
new connection on the next send.
Thanks,
Gerrit
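For readers hitting the same stuck-connection symptom, the fix Gerrit describes can be sketched as below. The `Connection` and `ConnectionFactory` types and all names here are illustrative stand-ins, not Kafka producer API: on a send timeout, close the (possibly wedged) connection and reconnect lazily on the next send instead of reusing a socket that may never recover.

```java
import java.io.IOException;

// Illustrative stand-ins; not part of the Kafka API.
interface Connection {
    void send(byte[] payload) throws IOException; // may throw on timeout
    void close();
}

interface ConnectionFactory {
    Connection connect();
}

/**
 * Sketch of the pattern described above: on a send failure, drop the
 * broken connection and create a fresh one on the next send.
 */
class ReconnectingSender {
    private final ConnectionFactory factory;
    private Connection conn; // null means "reconnect on next send"

    ReconnectingSender(ConnectionFactory factory) {
        this.factory = factory;
    }

    public synchronized void send(byte[] payload) throws IOException {
        if (conn == null) {
            conn = factory.connect(); // lazy (re)connect
        }
        try {
            conn.send(payload);
        } catch (IOException timeoutOrFailure) {
            conn.close();   // close the broken connection on timeout
            conn = null;    // force a fresh connection on the next send
            throw timeoutOrFailure;
        }
    }
}
```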
On Tue, Jan 14, 2014 at 10:20 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
> Hi,
>
> thanks I will do this.
>
>
>
> On Tue, Jan 14, 2014 at 9:51 AM, Jo
I've found the response to my own question:
http://mail-archives.apache.org/mod_mbox/kafka-users/201308.mbox/%3c44d1e1522419a14482f89ff4ce322ede25025...@brn1wnexmbx01.vcorp.ad.vrsn.com%3E
On Wed, Jan 29, 2014 at 1:17 PM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
> Hi,
>
> I'm testing kafka 0.8.0 failover.
Dear Kafka Users,
Is there any possibility for a producer to request commit ACK using Kafka 0.7?
The reason I'm considering Kafka 0.7 is integration with existing .Net
application(s).
So far I haven't found any Kafka 0.8 .Net client that works properly.
Thank you very much.
Best regards,
Janos
Hello, I'm a Kafka user.
I want to find the commit method from the consumer to ZooKeeper, but I can't find it.
I want to build a structure like this:
hadoop-consumer: 1. it gets a message
2. it writes the message to Hadoop HDFS
3. it has to fetch the message again from where it last read, when it fails and recovers.
( for exa
Hmm, it's weird that EC2 only allows you to bind to local ip. Could some
EC2 users here help out?
Also, we recently added https://issues.apache.org/jira/browse/KAFKA-1092,
which allows one to use a different ip for binding and connecting. You can
see if this works for you. The patch is only in trunk
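For reference, the bind/advertise split that patch introduces looks roughly like this in server.properties. The property names below are as they appear in later 0.8.1 docs, and the ip/hostname values are placeholders; verify both against the build you actually run:

```properties
# Bind to the instance's local/private ip (what EC2 allows).
host.name=10.0.0.12
# Hostname published to ZooKeeper for clients to connect to.
advertised.host.name=ec2-54-0-0-1.compute-1.amazonaws.com
advertised.port=9092
```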
If you are using the high level consumer, there is a commitOffsets() api.
If you are using SimpleConsumer, you are on your own for offset management.
The getOffsetsBefore api is not for getting the consumer offsets, but for
getting the offset in the log before a particular time.
Thanks,
Jun
On
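To illustrate what "on your own for offset management" means with SimpleConsumer, here is a minimal sketch of application-side offset bookkeeping. The `OffsetStore` class is purely illustrative (not Kafka API), and a real deployment would persist the map durably (e.g. to ZooKeeper) rather than keep it in memory:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative offset bookkeeping for a SimpleConsumer-style client:
 * the application tracks and commits consumed offsets itself.
 * Not Kafka API; a real store would persist offsets durably.
 */
class OffsetStore {
    private final Map<String, Long> committed = new HashMap<>();

    // Commit only after the message has been fully processed,
    // which yields at-least-once semantics across restarts.
    public void commit(String topicPartition, long nextOffset) {
        committed.put(topicPartition, nextOffset);
    }

    // Where a restarted consumer should resume fetching from.
    public long fetchOffset(String topicPartition) {
        return committed.getOrDefault(topicPartition, 0L);
    }
}
```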
No, only 0.8 has ack for the producers.
Thanks,
Jun
On Wed, Jan 29, 2014 at 6:54 AM, Janos Mucza wrote:
> Dear Kafka Users,
>
> Is there any possibility for a producer to request commit ACK using Kafka
> 0.7?
>
> The reason I'm considering Kafka 0.7 is integration with existing .Net
> application(s).
Jay,
There is both a basic client object, and a number of IO processing threads.
The client object manages connections, creating new ones when new machines
are connected, or when existing connections die. It also manages a queue of
requests for each server. The IO processing thread has a selector,
Hey Tom,
So is there one connection and I/O thread per broker and a low-level client
for each of those, and then you hash into that to partition? Is it possible
to batch across partitions or only within a partition?
-Jay
On Wed, Jan 29, 2014 at 8:41 AM, Tom Brown wrote:
> Jay,
>
> There is bo
Regarding the use of Futures -
Agree that there are some downsides to using Futures but both approaches
have some tradeoffs.
- Standardization and usability
Future is a widely used and understood Java API and given that the
functionality that RecordSend hopes to provide is essentially that of
Fut
Hey Neha,
Error handling in RecordSend works as in Future: you will get the exception,
if there is one, from any of the accessor methods or await().
The purpose of hasError was that you can write things slightly more simply
(which some people expressed preference for):
if(send.hasError())
// d
The challenge of directly exposing ProduceRequestResult is that the offset
provided is just the base offset and there is no way to know for a
particular message where it was in relation to that base offset because the
batching is transparent and non-deterministic. So I think we do need some
kind of
Is the new producer API going to maintain protocol compatibility with old
versions of the API under the hood?
> On Jan 29, 2014, at 10:15, Jay Kreps wrote:
>
> The challenge of directly exposing ProduceRequestResult is that the offset
> provided is just the base offset and there is no way to kno
Jay,
I think you're confused between my use of "basic client" and "connection".
There is one basic client for a cluster. An IO thread manages the tcp
connections for any number of brokers. The basic client has a queue of
requests for each broker. When a tcp connection (associated with broker X) is
rea
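The architecture Tom describes (one client per cluster, a request queue per broker, IO threads draining the queues) can be sketched roughly as follows. All class and method names here are illustrative, not the actual client code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Rough sketch of the design described above: the client keeps one
 * request queue per broker; an IO thread drains a broker's queue and
 * writes the batch to that broker's TCP connection.
 */
class PerBrokerQueues {
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    // Enqueue a request for the broker that leads the target partition.
    public void enqueue(String broker, String request) {
        queues.computeIfAbsent(broker, b -> new LinkedBlockingQueue<>()).add(request);
    }

    // What an IO thread would do when a broker's connection is writable:
    // drain the pending requests and send them (here we just collect them).
    public List<String> drain(String broker) {
        List<String> batch = new ArrayList<>();
        BlockingQueue<String> q = queues.get(broker);
        if (q != null) q.drainTo(batch);
        return batch;
    }
}
```

Note that in this shape, batching happens per broker (everything queued for broker X goes out together), which is what makes Jay's question about batching across partitions interesting.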
I strongly support the use of Future. In fact, the cancel method may not
be useless. Since the producer is meant to be used by N threads, it could
easily get overloaded such that a produce request could not be sent
immediately and had to be queued. In that case, cancelling should cause it
to not a
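Tom's point about cancel can be demonstrated with plain java.util.concurrent, independent of Kafka: a Future for a task still sitting in an executor's queue can be cancelled before it ever runs, which mirrors the overloaded-producer case where a queued produce request is cancelled before being sent.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Demonstrates the scenario above with plain JDK futures: while the
 * single worker is busy, a second task queues up, and cancelling its
 * Future prevents it from ever executing.
 */
class CancelQueuedTask {
    static boolean cancelWhileQueued() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch block = new CountDownLatch(1);
        AtomicBoolean secondRan = new AtomicBoolean(false);
        try {
            // First task occupies the only worker thread.
            pool.submit(() -> {
                try { block.await(); } catch (InterruptedException ignored) {}
            });
            // Second task is queued behind it, like a produce request
            // waiting because the producer is overloaded.
            Future<?> queued = pool.submit(() -> secondRan.set(true));
            boolean cancelled = queued.cancel(false); // no interrupt needed
            block.countDown(); // unblock the worker
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
            return cancelled && !secondRan.get();
        } finally {
            pool.shutdownNow();
        }
    }
}
```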
Yes, we will absolutely retain protocol compatibility with 0.8, though the
Java API will change. The prototype code I posted works with 0.8.
-Jay
On Wed, Jan 29, 2014 at 10:19 AM, Steve Morin wrote:
> Is the new producer API going to maintain protocol compatibility with old
> version if the API
>> The challenge of directly exposing ProduceRequestResult is that the
>> offset provided is just the base offset and there is no way to know for a
>> particular message where it was in relation to that base offset because
>> the batching is transparent and non-deterministic.
That's a good point. I need to
That is exactly what we are doing. But in the shutdown path we see the
following exception.
2013-12-19 23:21:54,249 FATAL [kafka.consumer.ZookeeperConsumerConnector]
(EFKafkaMessageFetcher-retryLevelThree)
[TestretryLevelThreeKafkaGroup_pshah-MacBook-Pro.local-1387494830478-efdbacea],
error du
I'm wrestling through the at-least-once/at-most-once semantics of my application
and was hoping for some confirmation of the semantics. I'm not sure I can
classify the high level consumer as either type.
False ack scenario:
- Thread A: call next() on the ConsumerIterator, advancing the
PartitionTopicInfo o
Jay et al,
What are your current thoughts on ensuring that the next-generation APIs
play nicely with both lambdas and the extensions to the standard runtime in
Java 8?
My thoughts are that if folks are doing the work to reimplement/redesign
the API, it should be as compatible as possible with the
Hi Marc,
Having a different set of metadata stored on the brokers to feed metadata
requests from producers and consumers would be very tricky I think. For
your use case, one thing you could try is to use a customized partitioning
function in the producer to only produce to a subset of the partitions
Hi Clark,
In practice, the client app code needs to always commit offsets after it has
processed the messages, and hence only the second case may happen, leading
to "at least once".
Guozhang
On Wed, Jan 29, 2014 at 11:51 AM, Clark Breyman wrote:
> Wrestling through the at-least/most-once semant
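Guozhang's "commit after processing, hence at least once" can be illustrated with a small simulation: if the consumer dies after processing a message but before committing its offset, a restarted consumer resumes from the last committed offset and reprocesses that message, producing a duplicate but never a loss. Everything below is an illustrative model, not the Kafka consumer API:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simulation of at-least-once semantics: process first, commit second.
 * A crash between the two steps causes a duplicate on restart, never a
 * lost message.
 */
class AtLeastOnceDemo {
    static List<String> run(List<String> log, int crashAfterProcessing) {
        List<String> processed = new ArrayList<>();
        long committed = 0; // durable committed offset

        // First consumer run: crashes right after processing the message
        // at offset crashAfterProcessing, before committing it.
        for (long off = committed; off < log.size(); off++) {
            processed.add(log.get((int) off));      // 1. process
            if (off == crashAfterProcessing) break; // crash before commit
            committed = off + 1;                    // 2. commit
        }

        // Restarted consumer resumes from the last committed offset.
        for (long off = committed; off < log.size(); off++) {
            processed.add(log.get((int) off));
            committed = off + 1;
        }
        return processed;
    }
}
```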
To avoid some consumers not consuming anything, one small trick might be
that if a consumer finds itself not getting any partition, it can force a
rebalancing by deleting its own registration path and re-register in ZK.
On Mon, Jan 27, 2014 at 4:32 PM, David Birdsong wrote:
> On Mon, Jan 27, 201
Guozhang,
Thanks, that makes sense except for the following:
- the ZookeeperConsumerConnector.commitOffsets() method commits the current
value of PartitionTopicInfo.consumeOffset for all of the active streams.
- the ConsumerIterator in the streams advances the value of
PartitionTopicInfo.consumeOffset
Can you check why the consumer thread is interrupted during the shutdown at
all?
On Wed, Jan 29, 2014 at 11:38 AM, paresh shah wrote:
> That is exactly what we are doing. But in the shutdown path we see the
> following exception.
>
> 2013-12-19 23:21:54,249 FATAL [kafka.consumer.ZookeeperConsume
Hi,
I try to run Kafka performance tests on my hosts. I get this error message:
[User@Client1 kafka_2.8.0-0.8.0]$ ./bin/kafka-producer-perf-test.sh
Exception in thread "main" java.lang.NoClassDefFoundError:
kafka/perf/ProducerPerformance
Caused by: java.lang.ClassNotFoundException: kafka.perf.Pr
Download the source and build from source
wget https://archive.apache.org/dist/kafka/0.8.0/kafka-0.8.0-src.tgz
tar -xvf kafka-0.8.0-src.tgz
cd kafka-0.8.0-src
./sbt update
./sbt package
./sbt assembly-package-dependency
/***
Joe Stein
Founder, Princip
We are interrupting the thread that uses the consumer connector. The question I
had was: if the LeaderFinderThread is interruptible, then it is the one that is
generating the exception due to the await() call (highlighted):
def shutdown(): Unit = {
info("Shutting down")
isRunning.set(false)
Hey Guys,
My 2c.
1. RecordSend is a confusing name to me. Shouldn't it be
RecordSendResponse?
2. Random nit: it's annoying to have the Javadoc info for the constants on
http://empathybox.com/kafka-javadoc/kafka/clients/producer/ProducerConfig.html,
but the string constant values on
http://empa
Hi Clark,
1. This is true, you need to synchronize these consumer threads when
calling commitOffsets();
2. If you are asking what happens if the consumer thread crashes after
currentTopicInfo.resetConsumeOffset(consumedOffset)
within the next() call, then on its startup, it will lose all these
in-memor
Hi,
We need a reliable low-latency message queue that can scale. Kafka looks like the
right system for this role.
I am running performance tests on multiple platforms: Linux and Windows. For
test purposes I create topics with 2 replicas and multiple partitions. In all
deployments running test pr
Guozhang,
Thanks. I'm thinking not of threads crashing but processes/vms/networks
disappearing. Lights-out design so that if any of the computers/routers
catch fire I can still sleep.
I think I can get what I want by spinning up N ZookeeperConsumerConnectors
on the topic each with one thread rath
Michael,
The producer client in 0.8 has a single thread that does blocking data
sends one broker at a time. However, we are working on a rewrite of the
producer with an improved design that can support higher throughput
compared to the current one. We don't have any performance numbers to share
ye
I'm trying to set a callback handler from java.
For that, I created a Callback Handler this way:
public class KafkaAsyncCallbackHandler implements CallbackHandler {
}
and set the property
callback.handler=com.lucid.kafka.KafkaAsyncCallbackHandler
But at runtime I get this exception coming fro
Which version of Kafka are you using? This doesn't seem to be the 0.8.0
release.
Thanks,
Jun
On Wed, Jan 29, 2014 at 11:38 AM, paresh shah wrote:
> That is exactly what we are doing. But in the shutdown path we see the
> following exception.
>
> 2013-12-19 23:21:54,249 FATAL [kafka.consumer.Zo
Normally you will just call shutdown on the connector. Is there a
particular reason that you need to interrupt it?
Thanks,
Jun
On Wed, Jan 29, 2014 at 2:07 PM, paresh shah wrote:
> We are interrupting the thread that uses the consumer connector. The
> question I had was if the LeaderFinderThre
Does the result change with just 1 partition?
Thanks,
Jun
On Wed, Jan 29, 2014 at 4:06 PM, Michael Popov wrote:
> Hi,
>
> We need a reliable low-latency message queue that can scale. Kafka looks
> like a right system for this role.
>
> I am running performance tests on multiple platforms: Lin
Is your producer instantiated with type byte[]?
Thanks,
Jun
On Wed, Jan 29, 2014 at 7:25 PM, Patricio Echagüe wrote:
> I'm trying to set a callback handler from java.
>
> For that, I created a Callback Handler this way:
>
> public class KafkaAsyncCallbackHandler implements CallbackHandler {
>
It's String. I also tried with the generic type String.
The CallbackHandler interface I implement is the one in
kafka.javaapi.producer.async package. Is that the right one?
I'm a bit confused because the exception mentions
kafka.producer.async.CallbackHandler.
On Wed, Jan 29, 2014 at 9:05 PM, J