Hey,
You should try setting topic-level config by doing:
kafka-topics.sh --alter --topic <topic> --config <key>=<value> --zookeeper <zk-host:port>
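For example (an illustrative command only; the topic name, retention value, and
zookeeper address are placeholders to adapt to your setup):
kafka-topics.sh --zookeeper zk-host:2181 --alter --topic my-topic --config retention.ms=86400000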
Make sure you also set segment.ms for topics which are not that populous.
This setting specifies the amount of time after which a new segment is rolled.
So Kafka deletes only those message
This is helpful. Thanks a lot :-)
On Tue, May 29, 2018 at 11:47 PM Matthias J. Sax
wrote:
> ConsumerRecord#timestamp()
>
> similar to ConsumerRecord#key() and ConsumerRecord#value()
>
>
> -Matthias
>
> On 5/28/18 11:22 PM, Shantanu Deshmukh wrote:
> > But then I wonder, why such things are not m
So can we roll segments more often? If the segments are small enough, the
probability of messages in a single segment reaching expiry will be higher.
However, will rolling segments frequently cause side effects, like
increased CPU or memory usage?
On Tue, May 29, 2018 at 11:52 PM Matthias J. Sa
Hi
I ran into this on Kafka 0.9.0.1 and solved it by setting the topic config.
You can give it a try with the kafka-topics.sh tool and change the topic's config
with the --config param.
Good luck
>-Original Message-
>From: Thomas Hays [mailto:hay...@gmail.com]
>Sent: Wednesday, May 30, 2018 12:02 AM
>To: users
Hey Ryan,
I ran into a similar issue and it was how the RoundRobinAssignor/Partitioner
was hashing the keys in my messages. You may want to look at how that's
implemented and see if it's causing all of your messages to end up in the
same partition.
For what it's worth, this ticket has the implement
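(For illustration, a rough sketch of how key-based partitioning typically maps
a key to a partition; the method and hash below are placeholders, not Kafka's
actual partitioner code. If all your keys hash to the same value, everything
ends up in one partition.)
// simplified sketch of key-hash partition selection (placeholder hash,
// not Kafka's murmur2 implementation)
int partitionFor(byte[] keyBytes, int numPartitions) {
    int hash = java.util.Arrays.hashCode(keyBytes); // stand-in hash
    return (hash & 0x7fffffff) % numPartitions;     // non-negative modulo
}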
Hi Kafka users,
tl;dr questions:
1. Is it normal or expected for the coordinator load state to last for 6
hours? Is this load time affected by log retention settings, message
production rate, or other parameters?
2. Do non-pykafka clients handle COORDINATOR_LOAD_IN_PROGRESS by consuming
only
Hi all,
I'm running into a weird slowness when using acks=all on Kafka 1.0.1.
I reproduced it on a 3-node cluster (each 4 cores/14GB RAM), using a topic
with replication factor 2.
I used the built-in kafka-producer-perf-test.sh tool with 1KB messages.
With all defaults, it can send 100K-200K messag
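(For reference, an illustrative kafka-producer-perf-test.sh invocation along
those lines; topic name, record count, and broker address are placeholders:)
kafka-producer-perf-test.sh --topic perf-test --num-records 100000 \
  --record-size 1024 --throughput -1 \
  --producer-props bootstrap.servers=broker1:9092 acks=all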
Hi, I'm using Kafka version 0.10.2.0 and trying to use MirrorMaker to mirror
messages from one Kafka cluster to another.
The source and target Kafka cluster are pretty much set up the same...
replication factor is 3, number of partitions is 3, auto.create.topics.enable
is true.
I am finding tha
About the docs:
Config `cleanup.policy` states:
> A string that is either "delete" or "compact". This string designates the
> retention policy to use on old log segments. The default policy ("delete")
> will discard old segments when their retention time or size limit has been
> reached. The
ConsumerRecord#timestamp()
similar to ConsumerRecord#key() and ConsumerRecord#value()
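A minimal sketch of reading it in a poll loop (assuming an already-configured
KafkaConsumer subscribed to a topic; names are placeholders):
// assumes: KafkaConsumer<String, String> consumer, already subscribed
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records) {
    long ts = record.timestamp();  // per-record timestamp
    System.out.println(record.key() + " / " + record.value() + " @ " + ts);
}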
-Matthias
On 5/28/18 11:22 PM, Shantanu Deshmukh wrote:
> But then I wonder, why such things are not mentioned anywhere in Kafka
> configuration document? I relied on that setting and it caused us some
> issue
Thanks for your suggestion. However, this doesn't seem applicable for our
Kafka version. We are using 0.10.0.1
On Tue, May 29, 2018 at 7:04 PM Manikumar wrote:
> Pls check "group.initial.rebalance.delay.ms" broker config property. This
> will be the delay for the initial consumer rebalance.
>
>
A single topic does not appear to be honoring the retention.ms
setting. Three other topics (plus __consumer_offsets) on the Kafka
instance are deleting segments normally.
Kafka version: 2.12-0.10.2.1
OS: CentOS 7
Java: openjdk version "1.8.0_161"
Zookeeper: 3.4.6
Retention settings (from kafka-to
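(To double-check whether a per-topic override is in place, a describe along
these lines can help; topic name and zookeeper address are placeholders:)
kafka-topics.sh --zookeeper zk-host:2181 --describe --topic my-topic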
This is a good article on the LinkedIn site - I think it's a good read
before attempting complicated designs
https://www.linkedin.com/pulse/exactly-once-delivery-message-distributed-system-arun-dhwaj/
On 29 May 2018 at 14:34, Thakrar, Jayesh
wrote:
> For more details, see https://www.slidesha
For more details, see https://www.slideshare.net/JayeshThakrar/kafka-68540012
While this is based on Kafka 0.9, the fundamental concepts and reasons are
still valid.
On 5/28/18, 12:20 PM, "Hans Jespersen" wrote:
Are you seeing 1) duplicate messages stored in a Kafka topic partition or
2)
Pls check "group.initial.rebalance.delay.ms" broker config property. This
will be the delay for the initial consumer rebalance.
from docs
"The rebalance will be further delayed by the value of
group.initial.rebalance.delay.ms as new members join the group,
up to a maximum of max.poll.interval.m
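(Illustrative server.properties snippet; the value shown is only an example,
not a recommendation:)
group.initial.rebalance.delay.ms=3000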
I cannot, because there are messages which need high priority. Setting the poll
interval to 4 seconds means there might be a delay of 4 seconds plus regular
processing time, which is not desirable.
Also, will it impact heartbeating?
On Tue, May 29, 2018 at 6:17 PM M. Manna wrote:
> Have you tried increase
No, no dynamic topic creation.
On Tue, May 29, 2018 at 6:38 PM Jaikiran Pai
wrote:
> Are your topics dynamically created? If so, see this thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg67224.html
>
> -Jaikiran
>
>
> On 29/05/18 5:21 PM, Shantanu Deshmukh wrote:
> > Hello,
> >
> > W
Are your topics dynamically created? If so, see this thread:
https://www.mail-archive.com/dev@kafka.apache.org/msg67224.html
-Jaikiran
On 29/05/18 5:21 PM, Shantanu Deshmukh wrote:
Hello,
We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10
partitions. I have an application
Have you tried increasing the poll time, e.g. to 4000, to see if that
helps matters?
On 29 May 2018 at 13:44, Shantanu Deshmukh wrote:
> Here is the code which consuming messages
>
>
> while(true && startShutdown == false) {
> Context context = new Context();
> JSONObject noti
Here is the code which consumes messages
while(true && startShutdown == false) {
Context context = new Context();
JSONObject notifJSON = new JSONObject();
String notificationMsg = "";
NotificationEvent notifEvent = null;
initializeContext();
try {
consumer
Thanks..
Where is your consumer code that is consuming messages?
On 29 May 2018 at 13:18, Shantanu Deshmukh wrote:
> No problem, here are consumer properties
> -
> auto.commit.interval.ms = 3000
> auto.offset.reset = latest
> bootstrap.servers = [x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092
No problem, here are consumer properties
-
auto.commit.interval.ms = 3000
auto.offset.reset = latest
bootstrap.servers = [x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 54
enable.auto.commit = true
exclude.internal.topics = true
fetch.m
Hi,
It's not possible to answer questions based on text alone. You need to share your
consumer.properties and server.properties files, and also what exactly you
have changed from the default configuration.
On 29 May 2018 at 12:51, Shantanu Deshmukh wrote:
> Hello,
>
> We have 3 broker Kafka 0.10.0.1 c
Hello,
We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10
partitions. I have an application which consumes from all these topics by
creating multiple consumer processes. All of these consumers are under the
same consumer group. I am noticing that every time we restart this
appli
In one of my consumer applications, I saw that 3 topics with 10 partitions
each were being consumed by 5 different consumers having the same consumer
group. And this application is seeing a lot of rebalances. Hence, I was
wondering about this.
On Tue, May 29, 2018 at 1:57 PM M. Manna wrote:
> topic
Topic and consumer group have a 1-to-many relationship. Each topic partition
will have its messages guaranteed to be in order. Consumer rebalance issues
can be adjusted based on the backoff and other params. What exactly is your
concern regarding consumer group and rebalance?
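(For illustration, a few consumer settings that commonly influence rebalance
behaviour; the values below are placeholders, not recommendations:)
session.timeout.ms=30000
heartbeat.interval.ms=3000
max.poll.records=100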
On 29 May 2018 at 08:
Hello,
Is it wise to use a single consumer group for multiple consumers who
consume from many different topics? Can this lead to frequent rebalance
issues?
We are using Kafka Connect to stream from a database with a JDBC Connector.
Some rows were wrongly deleted, and as a result our key-value stores
are now stale.
We thought we could solve the problem by using kafka-avro-console-producer
and produce a message with the deleted key and the null paylo