Are both consumers in the same consumer group, i.e., do they use the same
`group.id`? If yes, how many partitions does the topic have? If it has only
one partition, the observed behavior is expected, because a single
partition can only be read by a single consumer instance per consumer
group. The second consumer would get no partition assigned and would
therefore sit idle…
Hi,
I want to convince my company to use Kafka as our message queue system, and
I was wondering if it's possible to run Kafka on a single machine. We have a
unique use case where all our microservices run on a single machine, and
currently we are using Apache ActiveMQ as our message queue.
Apache ActiveMQ is g…
Hi,
I am quite new to Kafka, and I have encountered a weird case during the QA
stage for our application. We have 2 consumers consuming the same topic in a
Kafka cluster. The first consumer to start works fine and gets closed after
receiving all the messages. After that, the second one starts and just
hangs…
Hi Chao,
I suppose you would like to know, within a consumer group, which message is
coming from which partition, since partitions correspond to brokers and a
broker = an IP, right?
Well, if you really want to know this, then you have to get the context. E.g.,
within a processor there is a method call…
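To illustrate, here is a minimal sketch of reading that context inside a
Kafka Streams processor (the class name is made up; `context()` exposing
`topic()`, `partition()`, and `offset()` is the 2.x `ProcessorContext` API):

```java
import org.apache.kafka.streams.processor.AbstractProcessor;

// Hypothetical processor that logs where each record came from.
public class OriginLoggingProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
        // The ProcessorContext carries the source topic, partition, and offset
        // of the record currently being processed.
        System.out.printf("record key=%s came from %s-%d@%d%n",
                key, context().topic(), context().partition(), context().offset());
        context().forward(key, value);
    }
}
```

Note that a partition does not pin a message to one IP: the partition leader
can move between brokers, so you would still have to look the current leader
up in the cluster metadata. With a plain consumer, the same information is on
each `ConsumerRecord` via `topic()`, `partition()`, and `offset()`.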
Hi,
I am a green hand at Kafka. After installing a new Kafka cluster, a question
came up: how do I know which IP address a message comes from? Is there any
special setting for this?
Eagerly awaiting your reply.
chao@baozun.com
You're saying that with a 100ms commit interval, caching won't help because
it would still send the compacted changes to the changelog every 100ms?
Regarding the custom state store, I'll look into that, because I didn't go
much further than transformers and stores in my Kafka experience, so I'll
need…
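For anyone else reading along: wiring a named state store into a transformer
looks roughly like this (a sketch with placeholder store and topic names, not
the actual topology from this thread):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

StoreBuilder<KeyValueStore<String, String>> storeBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("my-store"),     // placeholder name
        Serdes.String(), Serdes.String());

StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(storeBuilder);

KStream<String, String> stream = builder.stream("input-topic"); // placeholder
stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        store = (KeyValueStore<String, String>) context.getStateStore("my-store");
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        store.put(key, value);   // example logic only
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {}
}, "my-store");   // the store must be named here to be attached to the transformer
```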
Alright, well I see why you have so much data being sent to the changelog
if each update involves appending to a list and then writing out the whole
list. And with 340 records/minute, I'm actually not sure how the cache could
really help at all when it's being flushed every 100ms.
Here's kind of a w…
Hi Sophie,
Just to give better context: yes, we use EOS, and the problem happens in our
aggregation store.
Basically, when windowing data we append each record to a list that's stored
in the aggregation store.
We have 2 versions: in production we use the Kafka Streams windowing API; in
staging we…
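For readers following the thread, the append-to-a-list pattern with the
windowing API looks roughly like this (a sketch; the topic name, window size,
and list serde are placeholders, and `Serdes.ListSerde` only exists in newer
client versions):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;

Serde<List<String>> listSerde =
    Serdes.ListSerde(ArrayList.class, Serdes.String()); // newer clients only

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("events")                // placeholder topic
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))  // placeholder window
    .aggregate(
        ArrayList::new,
        (key, value, list) -> { list.add(value); return list; },
        Materialized.with(Serdes.String(), listSerde));
```

This makes the changelog cost visible: every incoming record rewrites the
entire accumulated list for its window, so the bytes sent per update grow
with the window contents.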
It's an LRU cache, so once it gets full, new records will cause older ones
to be evicted (and thus sent downstream). Of course, this should only apply
to records of a different key; otherwise it will just cause an update of
that key in the cache.
I missed that you were using EOS. Given the short commit interval…
Hi Sophie,
thanks for helping.
By eviction of older records, do you mean they get flushed to the changelog
topic? Or is the cache just full, so that all new records go to the changelog
topic until the old ones are evicted?
Regarding the timing, what timing do you mean? Between when the cache stops
and…
It might be that the cache appears to "stop working" because it gets full,
and each new update causes an eviction (of some older record). This would
also explain the opposite behavior, that it "starts working" again after
some time without being restarted, since the cache is completely flushed on
commit…
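The two knobs in play here are the cache size and the commit interval; a
minimal sketch of setting them (values are placeholders, and note that
enabling EOS drops the default commit interval to 100ms):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// Total bytes available for record caching, shared across stream threads.
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
// Caches are flushed (and changelog records emitted) on every commit;
// with exactly-once processing the default commit interval is 100ms.
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100L);
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
```

A larger cache mostly helps when the same keys are updated repeatedly between
commits; if the cache fills up (or the commit fires) first, the records go
downstream regardless.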
Hello Kafka users, developers and client-developers,
This is the fifth candidate for release of Apache Kafka 2.4.0.
This release includes many new features, including:
- Allow consumers to fetch from closest replica
- Support for incremental cooperative rebalancing to the consumer rebalance
protocol…
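Side note on the first item, for anyone curious: with KIP-392 the consumer
opts in via `client.rack`, and the brokers need a replica selector. A minimal
sketch with placeholder values:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");              // placeholder
// Match this to the brokers' broker.rack values so fetches can be
// served by the nearest replica instead of always going to the leader.
props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "us-east-1a");
```

On the broker side this also requires
`replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector`
in server.properties.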
Hi all,
We have merged the PR for KAFKA-9212. Thanks to Jason for fixing the issue.
Thanks to Yannick for reporting the issue and to Michael Jaschob for
providing extra details.
I am canceling this vote and will create a new RC shortly.
Thanks,
Manikumar
Hi,
I am experimenting with MirrorMaker 2 in 2.4.0-rc3. It seems to start up
fine, connects to both source and destination, and creates the new topics,
but it does not start to actually mirror the messages until about 12
minutes after MM2 was started. I would expect it to start mirroring within
seconds…
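For context, a minimal MM2 setup is driven by an mm2.properties along these
lines (aliases, addresses, and the interval are placeholders):

```properties
clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-broker:9092

source->target.enabled = true
source->target.topics = .*

# How often MM2 rescans the source cluster for topics to replicate.
# The default is 600 seconds (10 minutes), which is at least in the same
# ballpark as the delay described above.
refresh.topics.interval.seconds = 60
```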
Hello Soman,
again, hard to tell; this is what the docs say:
"...if you are upgrading from a version prior to 0.11.0.x, then
CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION."
also:
"...Once the brokers begin using the latest protocol version, it will no
longer be possible to downgrade the cluster to an older version."
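Concretely, the recipe from those docs amounts to pinning both versions in
server.properties during the first rolling bounce (the versions below are
just examples):

```properties
# First rolling bounce: new binaries, old protocol and message format.
inter.broker.protocol.version=0.10.2
# Only needed when upgrading from before 0.11.0.x.
log.message.format.version=0.10.2
```

Once every broker runs the new binaries, bump `inter.broker.protocol.version`
(and later `log.message.format.version`) and do another rolling bounce; after
that bump, downgrading the cluster is no longer possible.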
And it seems that, for some reason, after a while caching works again
without a restart of the streams application.
[image: Screen Shot 2019-12-08 at 11.59.30 PM.png]
I'll try to enable debug metrics and see if I can find something useful
there.
Any idea is appreciated in the meantime :)
--
Alessan…