Hello All,
I am able to run Kafka in debug mode in IntelliJ CE but the info logs are
not showing up in the console.
Can someone please help me in this regard?
Thanks,
I don't think that is the case. The lag is huge, ~10^5 records.
On Thu, Jun 27, 2019 at 9:13 AM Srinath C wrote:
> Ok Garvit, I still don't see the image, but based on the inputs you
> provided, a possible scenario is that, between two polls from the consumer:
> (a) the
Ok Garvit, I still don't see the image, but based on the inputs you
provided, a possible scenario is that, between two polls from the consumer:
(a) the number of records added to the partitions already consumed in the
previous poll is 500 or more (max.poll.records)
or
(b) the si
Hi Srinath,
I have attached the image.
The partitions belong to the same topic only. I have not explicitly set
max.partition.fetch.bytes, fetch.max.bytes, or max.poll.records, so they
should take their default values.
Let me know.
Thanks,
On Thu, Jun 27, 2019 at 7:11 AM Srinath C wrote:
> Hi Gar
Hi Garvit,
I am unable to see the image you attached for some reason, so I cannot tell
whether the partitions are in the same topic or in different topics.
Check whether any of max.partition.fetch.bytes, fetch.max.bytes, or
max.poll.records configured in your consumer is causing this behaviour.
Regards,
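For reference, a minimal sketch of where those three consumer settings live, shown at their Kafka defaults (the group id is illustrative):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");               // illustrative
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);              // default: 500
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1048576); // default: 1 MiB
props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 52428800);          // default: 50 MiB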
Is there a correlation between the lagging partitions and the consumer assigned
to them?
> On Jun 26, 2019, at 4:25 PM, Garvit Sharma wrote:
>
> Can anyone please help me with this?
>
> On Wed, Jun 26, 2019 at 8:56 PM Garvit Sharma wrote:
>
>> Hey Steve,
>>
>> I have checked, count of messa
Can anyone please help me with this?
On Wed, Jun 26, 2019 at 8:56 PM Garvit Sharma wrote:
> Hey Steve,
>
> I have checked; the count of messages on all the partitions is the same.
>
> I am still exploring an approach by which the root cause could be
> determined.
>
> Thanks,
>
> On Wed, Jun 26, 2019
Hi Kiran
You can use a RocksDBConfigSetter and pass
options.setMaxOpenFiles(100);
to all RocksDB instances of the Streams application, which limits how many
files each instance keeps open at the same time.
best regards
Patrik
On Wed, 26 Jun 2019 at 16:14, emailtokir...@gmail.com <
emailtokir...@gmail.com> wrote:
>
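For later readers, a minimal sketch of what Patrik describes (the class name is illustrative; RocksDBConfigSetter and the rocksdb.config.setter config are part of the Kafka Streams API):

import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Cap the number of SST files each RocksDB instance keeps open.
        options.setMaxOpenFiles(100);
    }
}

// Registered via:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//           BoundedOpenFilesConfigSetter.class);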
Initially it started during testing: QA reported problems where "events" were
not detected after they had finished their testing. After this discussion, my
proposal was to send a few more records to cause the windows to flush so that
the suppressed event would show up. Now it looks to me, these few
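For context, a rough sketch of the suppress() pattern under discussion (topic name, window size, and grace period are illustrative): with untilWindowCloses, a window's final result is emitted only once stream time, advanced by newer records, passes the window end plus grace, which is why a few extra records are needed to flush the last windows.

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();
// Nothing is emitted for a window until stream time passes its end + grace.
KTable<Windowed<String>, Long> counts = builder
    .<String, String>stream("events")  // illustrative topic name
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
    .count()
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));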
Hey Steve,
I have checked; the count of messages on all the partitions is the same.
I am still exploring an approach by which the root cause could be
determined.
Thanks,
On Wed, Jun 26, 2019 at 8:07 PM Garvit Sharma wrote:
> I am not sure about that. Is there a way to analyse that?
>
> On Wed, J
Hi Mohan,
I see where you're going with this, and it might indeed be a
challenge. Even if you send a "dummy" message on all input topics, you
won't have a guarantee that after the repartition, the dummy message
is propagated to all partitions of the repartition topics. So it might
be difficult to
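To make the limitation concrete, a hypothetical sketch of the "dummy message" idea (topic name and payloads are illustrative, and producer is assumed to be an existing KafkaProducer<String, String>); even when every input partition receives a dummy record, the records are re-keyed before reaching the repartition topics, so there is no guarantee that every repartition partition sees one:

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

// Send one dummy record to every partition of an input topic.
for (PartitionInfo p : producer.partitionsFor("input-topic")) {
    producer.send(new ProducerRecord<>("input-topic", p.partition(),
                                       "dummy-key", "dummy-value"));
}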
Sure Bill, it is the same code for which I reported the suppress issue some
months ago:
https://stackoverflow.com/questions/54145281/why-do-the-offsets-of-the-consumer-group-app-id-of-my-kafka-streams-applicatio
In fact, I reported at that moment that after restarting the app, the
supp
Thanks for the reply Jonathan.
Are you in a position to share your code so I can try to reproduce on my
end?
-Bill
On Wed, Jun 26, 2019 at 10:23 AM Jonathan Santilli <
jonathansanti...@gmail.com> wrote:
> Hello Bill,
>
> I am implementing the TimestampExtractor interface, then using it to consum
I am not sure about that. Is there a way to analyse that?
On Wed, Jun 26, 2019 at 7:35 PM Steve Howard wrote:
> Hi Garvit,
>
> Are the slow partitions "hot", i.e., receiving a lot more messages than
> others?
>
> Thanks,
>
> Steve
>
> On Wed, Jun 26, 2019, 9:56 AM Garvit Sharma wrote:
> > Just to add
Hello Bill,
I am implementing the TimestampExtractor interface, then using it to consume,
like:
final KStream<..., ...> events = builder.stream(inputTopicList,
    Consumed.with(keySerde, valueSerde)
            .withTimestampExtractor(new OwnTimeExtractor(...)));
I am not setting the default.timestamp.extractor.
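For readers of the archive, a minimal sketch of such an extractor (the payload handling inside is illustrative, not Jonathan's actual code):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class OwnTimeExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record,
                        final long previousTimestamp) {
        // Illustrative: use the payload as the event time when it is a Long,
        // otherwise fall back to the record's own timestamp.
        if (record.value() instanceof Long) {
            return (Long) record.value();
        }
        return record.timestamp();
    }
}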
Hi Jonathan,
Thanks for reporting this. Which timestamp extractor are you using in the
configs?
Thanks,
Bill
On Wed, Jun 26, 2019 at 9:14 AM Jonathan Santilli <
jonathansanti...@gmail.com> wrote:
> Hello, hope you all are doing well,
>
> I am testing the new version 2.3 of Kafka Streams specifi
Hi,
We are using the Kafka Streams DSL APIs for doing some counter aggregations
(running on OpenJDK 11.0.2). Our topology has some 400 sub-topologies, and we
are using 8 partitions in the source topic. When we start pumping more load,
we start getting a RocksDBException stating "too many open files".
Here a
Hi,
I am new to Kafka and observed a strange behavior. (I am using Kafka through
the spring-kafka library)
I think I misunderstand something about the "liveness" of the data in a global
table.
I thought if I define and materialize a GlobalKTable at the startup of my
application, and messages a
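For reference, a minimal sketch of defining and materializing a GlobalKTable (topic and store names are illustrative); note that the table keeps consuming from its topic for as long as the application runs, rather than taking a one-time snapshot at startup:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
// Continuously populated from the topic; not a startup-time snapshot.
GlobalKTable<String, String> table = builder.globalTable(
    "reference-topic",
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("reference-store")
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.String()));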
Hi Garvit,
Are the slow partitions "hot", i.e., receiving a lot more messages than
others?
Thanks,
Steve
On Wed, Jun 26, 2019, 9:56 AM Garvit Sharma wrote:
> Just to add more details, these consumers are processing the Kafka events
> and writing to a DB (fast writes guaranteed).
>
> On Wed, Jun 26, 2019 at
Just to add more details, these consumers are processing the Kafka events
and writing to a DB (fast writes guaranteed).
On Wed, Jun 26, 2019 at 7:23 PM Garvit Sharma wrote:
> Hi All,
>
> I can see huge consumer lag in a few partitions of a Kafka topic. I need to
> know the root cause of this issue.
>
Hi All,
I can see huge consumer lag in a few partitions of a Kafka topic. I need to
know the root cause of this issue.
Please let me know how to proceed.
Below is sample consumer lag data:
[image: image.png]
Thanks,
sorry, wrong email :(
On Wed, 26 Jun 2019 at 14:20, Giovanni Colapinto <
giovanni.colapi...@ammeon.com> wrote:
> Hi Ashok
>
> Could you please open a support ticket? Please include a detailed reason
> why you want to upgrade ZooKeeper.
>
> Thanks,
> Giovanni
>
> On Wed, 26 Jun 2019 at 13:08, ASH
Hi Ashok
Could you please open a support ticket? Please include a detailed reason
why you want to upgrade ZooKeeper.
Thanks,
Giovanni
On Wed, 26 Jun 2019 at 13:08, ASHOK MACHERLA wrote:
> Dear Team
>
> Currently we are using Zookeeper 3.4.6 version, we are planning to upgrade
> the zookeeper
Hello, hope you all are doing well,
I am testing the new version 2.3 of Kafka Streams specifically. I have
noticed that now, the implementation of the extract method from the
interface org.apache.kafka.streams.processor.TimestampExtractor:
public long extract(ConsumerRecord record, long previousTimestamp
Dear Team
Currently we are using ZooKeeper version 3.4.6, and we are planning to
upgrade to version 3.5.5.
Kafka has already been upgraded to the new version 2.2.1,
so the ZooKeeper upgrade is also required.
Could you please tell me how to upgrade the ZooKeeper version?
Is there any documentation for this?