The message size itself increases over time.

The message looks something like:
key=[list of objects]
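
The aggregation is roughly of this shape (a simplified sketch with
placeholder topic, store, and type names, written against the current
Streams DSL). Every record is appended to a per-key list, so the serialized
value written to the changelog topic only ever grows:

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> events = builder.stream("events"); // placeholder input topic

// Append every record to the per-key list; nothing is ever trimmed, so each
// update written to the internal changelog topic is larger than the last.
KTable<String, List<String>> byKey = events
        .groupByKey()
        .aggregate(
                ArrayList::new,                        // initializer: empty list per key
                (key, value, agg) -> {                 // aggregator: append only
                    agg.add(value);
                    return agg;
                },
                Materialized.<String, List<String>, KeyValueStore<Bytes, byte[]>>as("events-by-key")
                        // ListSerde ships with newer kafka-clients versions
                        .withValueSerde(Serdes.ListSerde(ArrayList.class, Serdes.String())));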

This list grows over time, until at some point Kafka can no longer append
the message to the topic because its size exceeds max.message.bytes.
Since this is an internal topic backed by a table, I don't know how I can
control it.

If I could set retention.ms for this topic, I could purge old messages and
thereby ensure that the message size stays within the limit.
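
For reference, I believe the changelog is named
<application.id>-<store name>-changelog, so in principle it can be targeted
like any other topic (e.g. with the kafka-configs tool; the exact flags
depend on the broker version). I also understand that non-windowed KTable
changelogs are created with cleanup.policy=compact, so presumably the policy
would also need to include delete for retention.ms to take effect. What I
have in mind is passing topic-level overrides from the application, along
these lines (a sketch with hypothetical values; whether Streams supports the
"topic." config prefix depends on the version):

import java.util.Properties;
import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // hypothetical
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical

// Topic-level overrides applied to internal topics when Streams creates them;
// already-existing topics have to be altered with the kafka-configs tool.
props.put(StreamsConfig.topicPrefix(TopicConfig.CLEANUP_POLICY_CONFIG), "compact,delete");
props.put(StreamsConfig.topicPrefix(TopicConfig.RETENTION_MS_CONFIG), "86400000");      // 1 day
props.put(StreamsConfig.topicPrefix(TopicConfig.MAX_MESSAGE_BYTES_CONFIG), "10485760"); // 10 MB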

Thanks
Sachin



On Tue, Nov 8, 2016 at 11:22 PM, Eno Thereska <eno.there...@gmail.com>
wrote:

> Hi Sachin,
>
> Could you clarify what you mean by "message size increases"? Are messages
> going to the changelog topic increasing in size? Or is the changelog topic
> getting full?
>
> Thanks
> Eno
>
> > On 8 Nov 2016, at 16:49, Sachin Mittal <sjmit...@gmail.com> wrote:
> >
> > Hi,
> > We are using an aggregation by key on a KStream to create a KTable.
> > As I read from
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams%3A+Internal+Data+Management
> > it creates an internal changelog topic.
> >
> > However, as the streaming application runs over time, the message size
> > increases and it starts throwing a max.message.bytes exception.
> >
> > Is there a way to control the retention.ms time for internal changelog
> > topics, so that messages are purged before they exceed this size?
> >
> > If not, is there a way to control or avoid such an error?
> >
> > Thanks
> > Sachin
>
>
