Hi Team,
Thanks for the comments.
Here are more details on the steps we followed for the upgrade.
Cluster details: We are using a 4-node Kafka cluster with topics at a
replication factor of 3. For the upgrade test, we are using a topic with 5
partitions and a replication factor of 3.
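For reference, a topic layout like the one above can be confirmed before starting the upgrade with the stock kafka-topics.sh tool (the ZooKeeper address below is a placeholder):

```shell
# Placeholder ZooKeeper address; substitute your own. Against a live
# cluster you would run:
#   bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic student-activity
#
# Each partition line of --describe output looks like the sample below;
# here we parse one line to confirm the replication factor mechanically:
sample='Topic: student-activity Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3'
replicas=$(printf '%s\n' "$sample" | sed -n 's/.*Replicas: \([0-9,]*\).*/\1/p')
printf '%s\n' "$replicas" | awk -F',' '{print NF}'
```

The last line prints the number of replica ids for the partition, which should match the intended replication factor.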
Topic:student-activity Parti
Looks like the screen shots didn't come through.
Consider pasting the text.
Thanks
Original message
From: Yogesh Sangvikar
Date: 9/19/17 4:33 AM (GMT-08:00)
To: users@kafka.apache.org
Cc: Sumit Arora, Bharathreddy Sodinapalle, asgar@happiestminds.com
Subject: Re: Data
Hi Yogesh,
A few questions:
1. Please share the code for the test script.
2. At which point in the sequence below was the code for the brokers
updated to 0.10.2?
3. When doing a rolling restart, it's generally a good idea to ensure that
there are no under-replicated partitions.
4. Is controlled s
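On point 3, the under-replicated check can be scripted. The kafka-topics.sh invocation in the comments is the standard flag; the wait loop stubs it out so the sketch is self-contained:

```shell
# Against a live cluster, the real check would be (ZooKeeper address is
# a placeholder):
#   bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions
# which prints nothing once all partitions are fully replicated.
urp_output=""   # stand-in for the command's output in this sketch
until [ -z "$urp_output" ]; do
  sleep 5
  urp_output=""   # re-run the describe command here in real use
done
echo "safe to restart the next broker"
```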
Hi Everyone,
We recently upgraded our cluster from 0.9.0.1 to 0.10.0.1 but had to keep
our Kafka clients at 0.9.0.1. We now want to upgrade our clients and,
concurrently, the message version to 0.10.0.1.
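For what it's worth, the message-format flip is usually staged through two broker settings; the versions below follow the general shape of the 0.10.0.x upgrade notes and should be checked against the docs for your exact versions:

```properties
# server.properties during the broker upgrade: speak the new inter-broker
# protocol but keep writing the old message format while 0.9.0.1 clients
# remain, so brokers don't down-convert on every fetch.
inter.broker.protocol.version=0.10.0
log.message.format.version=0.9.0

# After all clients are upgraded, bump the format in a second rolling
# restart:
# log.message.format.version=0.10.0
```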
When we did the 0.9.0.1 -> 0.10.0.1 broker upgrade we were not able to
upgrade the kafka clie
0.10.0.1 consumers understand the older formats. So, the conversion only
happens when the message format is newer than what the consumer
understands. For the producer side, the conversion is not particularly
costly since the data is in the heap and, if you use compression, 0.9.0.x
would do recompre
Ah, cool, thanks Ismael!
--John
On Tue, Sep 19, 2017 at 10:20 AM, Ismael Juma wrote:
> 0.10.0.1 consumers understand the older formats. So, the conversion only
> happens when the message format is newer than what the consumer
> understands. For the producer side, the conversion is not particula
Hi Everyone,
Since exactly-once and transactional semantics are the most important and
most celebrated features of the 0.11.x release, why doesn't MirrorMaker apply
exactly-once and transactional semantics at 0.11.x?
I think this will shake our confidence in upgrading to 0.11.x to use the
idempotent sema
Hello,
I'm using Kafka 0.10.1.1
I set up my Kafka + ZooKeeper cluster on three nodes (three brokers, one topic,
6 partitions, 3 replicas).
When I send messages using Kafka producer (independent node), sometimes I get
this error and I couldn't figure out how to solve it.
" org.apache.kafka.c
Hi Apurva,
My transactions are pretty small: only one producer.send to Kafka in this
particular case (though I have tested with up to 100).
The producer code is embedded in an app linked via a JDBC connection to a
database.
I tested kafka-producer-perf-test.sh: not sure to clearly underst
What is the retention time on the topic you are publishing to?
From: MAHA ALSAYASNEH
Sent: Tuesday, September 19, 2017 10:25:15 AM
To: users@kafka.apache.org
Subject: Question about Kafka
Hello,
I'm using Kafka 0.10.1.1
I set up my cluster Kafka + zookeeper on
Well, I kept the default:
log.retention.hours=168
Here are my broker configurations:
# Server Basics #
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3
host.name=
port=9092
zookeeper.co
Hi, Kafka Users,
In the documentation for replication throttling, it is mentioned that the
throttle should be removed after partitions have moved or after a broker has
completed bootstrapping (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas#KIP-73ReplicationQuotas-2.HowdoIthrottleabootstrappingbrok
Based on the prerequisites mentioned on Github, Confluent platform seems to
be required for using KSQL:
https://github.com/confluentinc/ksql/blob/0.1.x/docs/quickstart/quickstart-non-docker.md#non-docker-setup-for-ksql
Did anyone try KSQL against vanilla Apache Kafka?
Thanks!
Hello,
I am using Kafka broker and Java client library v 0.11.0.0.
When I restart my Kafka consumer application which uses Java Kafka client
library to retrieve messages, I notice that for each partition, the message
associated with the last offset that was committed successfully gets
re-consumed
> On Sep 15, 2017, at 23:08, Amir Nagri wrote:
>
> Were you able to resolve above?
No, not yet. And I haven’t had a chance to open that JIRA ticket… sorry about
that. Will try to get to it soon.
Software Architect @ Park Assist » http://tech.parkassist.com/
Hello All -
I was able to set up SSL for the Kafka brokers using OpenSSL.
However, I'm having issues setting up SSL using the PEM file (i.e., the SSL
certificate certified by a CA, provided by the company).
Here is what I've done:
created the server/client keystore & truststore files and importe
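One common pattern for getting a CA-signed PEM pair into the JKS files Kafka expects is sketched below; file names and passwords are placeholders, and this is the usual OpenSSL/keytool flow rather than your exact setup:

```shell
# Bundle the signed certificate and private key into PKCS12, convert it
# to the broker keystore, and put the CA certificate in the truststore:
#   openssl pkcs12 -export -in signed-cert.pem -inkey server-key.pem \
#       -out server.p12 -name broker -password pass:changeit
#   keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 \
#       -destkeystore kafka.server.keystore.jks -deststorepass changeit
#   keytool -importcert -alias ca-root -file ca-cert.pem \
#       -keystore kafka.server.truststore.jks -storepass changeit -noprompt
```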
Is it possible to change the replication factor at runtime? We're using
version 10.x.
Thanks,
Devendar
You can do this using the kafka-reassign-partitions tool (or using a 3rd
party tool like kafka-assigner in github.com/linkedin/kafka-tools) to
explicitly assign the partitions to an extra replica, or remove a replica.
-Todd
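For example, a reassignment file that grows a partition from replicas [1,2,3] to [1,2,3,4] looks like this (the topic name and broker ids are made up):

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1, 2, 3, 4] }
  ]
}
```

Feed it to kafka-reassign-partitions.sh with --reassignment-json-file and --execute; dropping an id from the replicas list shrinks the factor the same way.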
On Tue, Sep 19, 2017 at 3:45 PM, Devendar Rao
wrote:
> Is it possible
We are using the other components of the Confluent platform without installing
the full Confluent platform, and it's no problem at all. I don't see why it
would be any different with this one.
On Tue, Sep 19, 2017 at 1:38 PM, Buntu Dev wrote:
> Based on the prerequisites mentioned on Github, Confluent plat
Those prerequisites are just for the Confluent CLI used in the quickstart. The
Apache Kafka and ZooKeeper versions included in the Confluent distribution are
the latest and the same as the Apache Kafka download, so it will work. You will
just need to start ZooKeeper and Kafka with the shell scrip
Hi,
If you are using the commitSync(Map<TopicPartition, OffsetAndMetadata>
offsets) API, then the committed offset should be the offset of the next
message your application will consume, i.e.
lastProcessedMessageOffset + 1.
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#commitSync(java.util.Map)
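A dependency-free sketch of that off-by-one rule (the offsets and messages here are made up for illustration and are not the Kafka API itself):

```python
# Offsets 40..43 in a partition's log; a restarted consumer resumes
# fetching at exactly the committed offset.
log = {40: "m40", 41: "m41", 42: "m42", 43: "m43"}

def resume(committed_offset):
    """First message a restarted consumer sees for a committed offset."""
    return log[committed_offset]

last_processed = 41  # offsets 40 and 41 were fully processed

# Committing last_processed itself re-delivers message 41 on restart:
assert resume(last_processed) == "m41"

# Committing last_processed + 1 resumes at the first unprocessed message:
assert resume(last_processed + 1) == "m42"
```

Committing the last processed offset rather than the next one is exactly what produces the one-message re-consumption described above.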
On Wed,
Hello,
Any suggestion regarding this msg:
" org.apache.kafka.common.errors.TimeoutException: Expiring 61 record(s) for
due to 30001 ms has passed since batch creation plus linger time "
Thanks in advance
Maha
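For reference, the producer settings that interact with that expiry are sketched below; the values are the defaults as I recall them for 0.10.1.x, so verify them against the docs for your version:

```properties
# A batch is expired when it sits unsent longer than request.timeout.ms
# (plus linger.ms) in the producer's buffer, which usually points at
# unreachable or overloaded brokers rather than at the timeout itself.
request.timeout.ms=30000
linger.ms=0
batch.size=16384
max.block.ms=60000
```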
From: "MAHA ALSAYASNEH"
To: "users"
Sent: Tuesday, September 19, 2017 6