We are having issues with some of our older consumers getting stuck reading a
topic. The issue seems to occur at specific offsets. Here's an excerpt from
kafka-dump-log on the topic partition around the offset in question:
baseOffset: 13920966 lastOffset: 13920987 count: 6 baseSequence: -1 last...

...properties:
max.request.size=15728640

consumer:
max.partition.fetch.bytes
On 7/28/20, 9:51 AM, "Thomas Becker" <thomas.bec...@xperi.com> wrote:
[External]
We have some legacy applications using an old (0.10.0.0) version of the
consumer that are hitting RecordTooLargeExceptions with the following message:
org.apache.kafka.common.errors.RecordTooLargeException: There are some messages
at [Partition=Offset]: {mytopic-0=13920987} whose size is larger
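For a record that large to flow end to end, the broker, producer, and consumer limits all have to be at least the maximum record size. A minimal sketch of how the settings are typically aligned, assuming the 15728640 value from the excerpt above; the broker-side property names and the idea of setting them all to the same value are illustrative, not taken from the thread:

```properties
# Broker (server.properties) - must accept records this large,
# and replicas must be able to fetch them (assumed values).
message.max.bytes=15728640
replica.fetch.max.bytes=15728640

# Producer - value quoted in the thread above.
max.request.size=15728640

# Consumer (Java consumer, 0.9+) - the knob named in the thread;
# must be at least as large as the biggest record in the partition.
max.partition.fetch.bytes=15728640
```

If the consumer's fetch limit is smaller than a record at the next offset, the 0.10-era consumer cannot make progress past that offset and throws RecordTooLargeException, which matches the "stuck at a specific offset" symptom described.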
I've been experimenting with the streams SessionStore and found some behavior
that contradicts the javadoc. Specifically: I have a SessionStore, and put() a
session with key K1 and session of time T0-T5. I then call findSessions("K1",
T2, T4) and it comes back empty. I would expect the session to be returned, since the query range overlaps it.
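Per the findSessions javadoc, the query findSessions(key, earliestSessionEndTime, latestSessionStartTime) should return every session that overlaps the given range. The expected overlap check can be sketched as a pure predicate (the method name and the test values T0-T5 vs. T2-T4 are illustrative):

```java
public class SessionOverlap {
    // A session [sessionStart, sessionEnd] matches findSessions(key, earliestEnd, latestStart)
    // when it ends no earlier than earliestEnd and starts no later than latestStart.
    static boolean matches(long sessionStart, long sessionEnd,
                           long earliestEnd, long latestStart) {
        return sessionEnd >= earliestEnd && sessionStart <= latestStart;
    }

    public static void main(String[] args) {
        // Session T0-T5 queried with findSessions("K1", T2, T4): overlaps, so it should match.
        System.out.println(matches(0, 5, 2, 4)); // true
    }
}
```

By this reading, a session spanning T0-T5 overlaps the T2-T4 query window, so an empty result would indeed contradict the documented semantics.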
my update for both three seconds,
> I will have to count the number of punctuations and calculate the missed
> stream times for myself. It's ok for me to trigger it 3 times, but the
> timestamp should not be the same in each, but should be increased by the
> schedule time in each punctuation.
I'm a bit troubled by the fact that it fires 3 times despite the stream time
being advanced all at once; is there a scenario in which this is beneficial?
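The behavior the quoted poster expected can be sketched as a small helper: when stream time jumps past several schedule intervals at once, each catch-up punctuation would carry a timestamp advanced by the interval, rather than all three firing with the same stream time. This is a hypothetical illustration of the expectation, not Kafka Streams' actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class PunctuationTimestamps {
    // Given the last punctuated time, the new stream time, and the schedule
    // interval, return the timestamp each catch-up punctuation would carry
    // if timestamps advanced by one interval per firing.
    static List<Long> expectedTimestamps(long lastPunctuated, long streamTime, long interval) {
        List<Long> out = new ArrayList<>();
        for (long t = lastPunctuated + interval; t <= streamTime; t += interval) {
            out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        // Stream time jumps from 0 to 3000 ms with a 1000 ms schedule:
        // three punctuations, at 1000, 2000, and 3000 rather than 3000 three times.
        System.out.println(expectedTimestamps(0, 3000, 1000)); // [1000, 2000, 3000]
    }
}
```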
From: Matthias J. Sax [matth...@confluent.io]
Sent: Friday, May 12, 2017 12:38 PM
To: users@kafka.apache.o
sounds like the CRUD APIs still require explicitly
including the replication factor param in the CreateTopic call.
That's essentially the crux of my question... why does the client ever need
to know the default param if the broker is already aware of it? Was this an
explicit design decision?
Yes, this has been an issue for some time. The problem is that the AdminUtils
requires this info to be known client side, but there is no API to get it. I
think things will be better in 0.11.0 where we have the AdminClient that
includes support for both topic CRUD APIs (not just ZK modifications).
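For what it's worth, in later client versions this got resolved on the API side: since Kafka 2.4 (KIP-464) the AdminClient's NewTopic accepts empty Optionals so the broker applies its own defaults, which is exactly the behavior asked about above. A sketch, assuming a reachable broker; the topic name and address are placeholders, and it requires the kafka-clients dependency:

```java
import java.util.Collections;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithDefaults {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Empty Optionals let the broker fill in its default partition
            // count and default.replication.factor (Kafka 2.4+ / KIP-464).
            NewTopic topic = new NewTopic("mytopic", Optional.empty(), Optional.empty());
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```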
Does this fix the problem though? The docs indicate that new data is required
for each *partition*, not topic. Overall I think the "stream time" notion is a
good thing for a lot of use-cases, but some others definitely require
wall-clock based windowing. Is something planned for this?
-Tommy
Couldn't this have been solved by returning a ReadOnlyKeyValueIterator
that throws an exception from remove() from the
ReadOnlyKeyValueStore.iterator()? That preserves the ability to call
remove() when it's appropriate and moves the refused bequest to when
you shouldn't.
On Thu, 2017-03-23 at 11:0
We ran into an incident a while back where one of our broker machines
abruptly went down (AWS is fun). While the leadership transitions and
so forth seemed to work correctly with the remaining brokers, our
producers hung shortly thereafter. I should point out that we are using
the old Scala producer.
-Tommy
On Mon, 2016-12-05 at 11:00 -0500, Radek Gruchalski wrote:
> Hi Thomas,
>
> Defaults are good for sure. Never had a problem with default timeouts
> in AWS.
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com
>
>
> On December 5, 2016 at 4:58:
I know several folks are running Kafka in AWS, can someone give me an
idea of what sort of values you're using for ZK session timeouts?
--
Tommy Becker
Senior Software Engineer
O +1 919.460.4747
tivo.com
The only obvious downside I'm aware of is not being able to benefit
from the bugfixes in the client. We are essentially doing the same
thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
our clients from 0.8.1.x.
On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> Hi Everyone