@Robert Quinlivan: the producer is just the kafka-console-producer
shell that comes in the kafka/bin directory (kafka/bin/windows in my
case). Nothing special.
I'll try messing with acks, because this problem is somewhat incidental to
what I'm trying to do, which is to see how big the log directory grows.
Just to point out to you all, I also get a similar exception in my Streams
application when the producer is trying to commit something to a changelog topic:
Error sending record to topic test-stream-key-table-changelog
org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s)
expired due t
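For what it's worth, in the 0.10.x Java producer a batch that cannot be sent
(leader unknown, broker unreachable, stale metadata) is expired after roughly
request.timeout.ms, which is what surfaces as this "Batch containing N
record(s) expired" TimeoutException. A minimal sketch of the knobs involved,
with a placeholder broker address and topic name (how these settings reach a
Streams application's internal producer depends on the Streams version and is
not shown here):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BatchExpirySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // Batches that cannot be shipped within this window are failed with
            // the TimeoutException quoted above; raising it buys time to ride
            // out short broker or metadata hiccups.
            props.put("request.timeout.ms", 60000);
            props.put("retries", 3);  // retry transient send failures

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("some-topic", "key", "value"),
                        (metadata, exception) -> {
                            if (exception != null) {
                                exception.printStackTrace();  // the expiry shows up here
                            }
                        });
            }
        }
    }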
> bootstrap.servers = ,
Is your bootstrap.servers configuration correct? You have specified
port `9091`, but you are running the GetOffsetShell command on `9094`.
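A quick way to double-check a suspected bootstrap.servers mismatch from the
client side is to point a throwaway consumer at the address you believe is
right and ask it for metadata. A sketch, with placeholder host and port:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BootstrapCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Must match a port the broker actually listens on (9094 in the
            // GetOffsetShell command mentioned above, not 9091).
            props.put("bootstrap.servers", "localhost:9094");  // placeholder host
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // listTopics() needs a working bootstrap connection, so it either
                // prints the topic names or times out if the endpoint is wrong.
                System.out.println(consumer.listTopics().keySet());
            }
        }
    }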
On Wed, Apr 19, 2017 at 11:58 AM, Ranjith Anbazhakan <
ranjith.anbazha...@aspiresys.com> wrote:
> Unfortunately, there is no specifi
Sorry about that. That was a typo.
The exact configuration is as below:
bootstrap.servers = ,
Thanks,
Ranjith
-Original Message-
From: kamaltar...@gmail.com [mailto:kamaltar...@gmail.com] On Behalf Of Kamal C
Sent: Wednesday, April 19, 2017 16:25
To: users@kafka.apache.org
Subject: Re:
While we were testing, our producer had the following configuration:
max.in.flight.requests.per.connection=1, acks=all, and retries=3.
The entire producer-side setup is below. The consumer uses manual offset
commit; it commits the offset only after it has successfully processed the message.
Producer settings:
boot
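The posted settings are cut off above, so for concreteness here is a minimal
Java sketch of that kind of setup; the broker address, topic name and group id
are placeholders, not the original poster's values. It shows a producer with
acks=all, retries=3 and max.in.flight.requests.per.connection=1, plus a
consumer that commits offsets manually only after processing:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AtLeastOnceSketch {
        public static void main(String[] args) {
            // Producer side: the configuration described above.
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");  // placeholder
            p.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            p.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            p.put("acks", "all");                               // wait for all in-sync replicas
            p.put("retries", 3);                                // a retried batch can produce duplicates
            p.put("max.in.flight.requests.per.connection", 1);  // avoids reordering on retry
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.send(new ProducerRecord<>("test", "key", "value"));
            }

            // Consumer side: manual commit, only after the records were processed.
            Properties c = new Properties();
            c.put("bootstrap.servers", "localhost:9092");  // placeholder
            c.put("group.id", "example-group");            // placeholder
            c.put("enable.auto.commit", "false");
            c.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
                consumer.subscribe(Collections.singletonList("test"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("processed %s -> %s%n", record.key(), record.value());
                }
                consumer.commitSync();  // commit only after successful processing
            }
        }
    }

With settings like these, a retried send or a reprocessed poll can still write
or handle a record twice, which is why duplicates rather than losses are the
expected failure mode in this configuration.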
Hi There
I would like to subscribe to this mailing list and know more about kafka.
Please add me to the list. Thanks in advance
Thanks
Arunkumar Pichaimuthu, PMP
Arun,
send an e-mail to users-subscr...@kafka.apache.org
Thanks,
Prahalad
On Wed, Apr 19, 2017 at 8:24 PM, Arunkumar
wrote:
>
> Hi There
> I would like to subscribe to this mailing list and know more about kafka.
> Please add me to the list. Thanks in advance
>
> Thanks
> Arunkumar Pichaimuthu,
If this is what I think it is, it has nothing to do with acks,
max.in.flight.requests.per.connection, or anything client-side; it is
purely about the Kafka cluster.
Here's a simple example involving a single zookeeper instance, 3 brokers, a
KafkaConsumer and KafkaProducer (neither of these clients
The OP was asking about duplicate messages, not lost messages, so I think
we are discussing two different possible scenarios. Whenever someone says
they see duplicate messages, it's always good practice to first double-check
the ack mode, in-flight requests, and retries. Also, it's important to check if
Hi, Shri,
As Onur explained, if ZK is down, Kafka can still work, but won't be able
to react to actual broker failures until ZK is up again. So if a broker is
down in that window, some of the partitions may not be ready for read or
write.
As for the duplicates in the consumer, Hans had a good poi
*As Onur explained, if ZK is down, Kafka can still work, but won't be able
to react to actual broker failures until ZK is up again. So if a broker is
down in that window, some of the partitions may not be ready for read or
write.*
We had a production scenario where ZK had a long GC pause and Kafka
Oops, I linked to the wrong ticket, this is the one we hit:
https://issues.apache.org/jira/browse/KAFKA-3042
On Wed, Apr 19, 2017 at 1:45 PM, Jeff Widman wrote:
>
> *As Onur explained, if ZK is down, Kafka can still work, but won't be able
> to react to actual broker failures until ZK is
Thanks Jeff, Onur, Jun, Hans. I am learning a lot from your responses.
Just to summarize my steps briefly: 5-node Kafka and ZK cluster.
1. ZK cluster has all nodes working. Consumer is down.
2. Bring down the majority of ZK nodes.
3. Things are functional, no issues (no duplicate or lost messages).
4. Now first ka
Just to add, I see the below behavior repeat even with the command-line
console producer and consumer that come with Kafka.
Thanks,
Shri
__
Shrikant Patel | 817.367.4302
Enterprise Architecture Team
PDX-NHIN
-Original Message-
From: Shrikant Pat
The kafka-console-producer.sh defaults to acks=1, so just be careful about
using those tools for too much debugging. Your output is helpful, though.
https://github.com/apache/kafka/blob/5a2fcdd6d480e9f003cc49a59d5952ba4c515a71/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L185
-hans
On Wed,
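Since the console producer's acks=1 default can hide exactly the failure modes
being discussed, here is a short Java sketch (placeholder broker and topic)
that makes the acks setting explicit; the comments summarize the usual meaning
of the three values:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ExplicitAcks {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            // acks=0   : fire and forget, no broker acknowledgement
            // acks=1   : leader-only acknowledgement (the console producer's default);
            //            an acked record can still be lost if the leader dies before
            //            followers have copied it
            // acks=all : wait until all in-sync replicas have the record
            props.put("acks", "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test", "hello")).get();  // block so errors surface
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }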
Hi
I love this project, so I want to take part in it. But I'm brand new to
open-source projects.
How can I get started making contributions? Can you give me some advice?
By the way, I already have a JIRA account called "james.c".
Sincerely,
James.C
Hi James,
This page has all the information you are looking for.
https://kafka.apache.org/contributing
On Thu, Apr 20, 2017 at 9:32 AM, James Chain
wrote:
> Hi
> I love this project, so I want to take part in it. But I'm brand new to
> open-source projects.
>
> How can I get started to ma
Hello,
Yesterday, I had to replace a faulty Kafka broker node, and the method of
replacement involved bringing up a blank replacement using the old broker's
ID, thus triggering a replication of all its old partitions.
Today I was dealing with disk usage alerts for only that broker: it turned
out
AFAIK, this behavior changed in the 0.10.1.0 release. Retention is now based
on the largest timestamp of the messages in a log segment.
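For context, the timestamps that this time-based retention looks at are
carried by the records themselves (CreateTime from the producer by default, or
LogAppendTime if the broker/topic is configured that way), not taken from the
segment file's modification time. A small sketch, with placeholder broker and
topic, of a producer attaching an explicit create timestamp:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TimestampedRecord {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Since the 0.10 message format every record carries a timestamp; if
                // none is given the producer stamps the current time. With the 0.10.1
                // behavior described above, a segment becomes eligible for time-based
                // deletion according to the largest of these timestamps.
                long createTime = System.currentTimeMillis();
                producer.send(new ProducerRecord<>("test", null, createTime, "key", "value")).get();
            }
        }
    }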
On Thu, Apr 20, 2017 at 11:19 AM, Gwilym Evans wrote:
> Hello,
>
> Yesterday, I had to replace a faulty Kafka broker node, and the method of
> replacement involv
I am running 0.10.1.0, so if that's true, it might not be the default. If you
know of a config value to change, that would be very helpful.
-Gwilym
On 20 April 2017 at 06:07, Manikumar wrote:
> AFAIK, this behavior changed in the 0.10.1.0 release. Retention is now based
> on the largest
> timestamp
You may be producing in the old message format. Check the
"log.message.format.version" config.
What is the version of the Producer/Consumer clients?
On Thu, Apr 20, 2017 at 11:39 AM, Gwilym Evans wrote:
> I am running 0.10.1.0, so if that's true, it might not be the default. If you
> know of a c
inter.broker.protocol.version = 0.10.1-IV2
log.message.format.version = 0.10.1-IV2
It will take me longer to check the producer/consumer versions, but I
believe they're all *at least* 0.10
-Gwilym
On 20 April 2017 at 06:42, Manikumar wrote:
> You may be producing in the old message format. C