Great. Please open a jira and attach your patch there.

Thanks,

Jun


On Mon, Jan 20, 2014 at 10:37 PM, Bae, Jae Hyeon <metac...@gmail.com> wrote:

> Nope, just packaging for the Netflix cloud environment.
>
> The first one is that producer discovery (metadata.broker.list) is
> integrated with Netflix Eureka.
> The second one is that the Yammer metrics library is connected to Netflix
> Servo.
> Apart from those two big things, I only changed a few lines to fit our
> monitoring environment.
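>
> Roughly, the broker discovery piece looks like the sketch below (simplified;
> the Eureka application name and ports are illustrative, not our exact code):
>
>     import java.util.Properties;
>     import com.netflix.appinfo.InstanceInfo;
>     import com.netflix.discovery.DiscoveryManager;
>     import kafka.javaapi.producer.Producer;
>     import kafka.producer.ProducerConfig;
>
>     public class EurekaBrokerListExample {
>         public static void main(String[] args) {
>             // Resolve broker host:port pairs from Eureka instead of a
>             // hard-coded metadata.broker.list.
>             StringBuilder brokerList = new StringBuilder();
>             for (InstanceInfo instance : DiscoveryManager.getInstance()
>                     .getDiscoveryClient()
>                     .getApplication("KAFKA_BROKER")  // illustrative app name
>                     .getInstances()) {
>                 if (brokerList.length() > 0) {
>                     brokerList.append(",");
>                 }
>                 brokerList.append(instance.getHostName())
>                           .append(":")
>                           .append(instance.getPort());
>             }
>
>             Properties props = new Properties();
>             props.put("metadata.broker.list", brokerList.toString());
>             props.put("serializer.class", "kafka.serializer.StringEncoder");
>             Producer<String, String> producer =
>                 new Producer<String, String>(new ProducerConfig(props));
>             producer.close();
>         }
>     }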
>
> If I get a chance, I will send a pull request to you.
>
> Thank you
> Best, Jae
>
>
> On Mon, Jan 20, 2014 at 9:03 PM, Jun Rao <jun...@gmail.com> wrote:
>
> > What kind of customization are you performing? Are you changing the wire
> > and on-disk protocols?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Mon, Jan 20, 2014 at 10:02 AM, Bae, Jae Hyeon <metac...@gmail.com>
> > wrote:
> >
> > > Due to the short retention period, I don't have that log segment anymore.
> > >
> > > How I am developing Kafka is as follows:
> > >
> > > I forked apache/kafka into my personal repo and customized it a little
> > > bit. I kept tracking the 0.8 branch, but you seem to have moved to the
> > > trunk branch.
> > >
> > > I will update it to the trunk branch or the 0.8.0 tag.
> > >
> > > Thank you
> > > Best, Jae
> > >
> > >
> > >
> > >
> > > On Mon, Jan 20, 2014 at 8:01 AM, Jun Rao <jun...@gmail.com> wrote:
> > >
> > > > Could you use our DumpLogSegments tool on the relevant log segment and
> > > > see if the log is corrupted? Also, are you using the 0.8.0 release?
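> > > >
> > > > For reference, the tool is normally run via bin/kafka-run-class.sh
> > > > kafka.tools.DumpLogSegments; a rough Java equivalent of the same call
> > > > (the segment path is just a placeholder) would be:
> > > >
> > > >     public class DumpSegmentExample {
> > > >         public static void main(String[] args) {
> > > >             // Prints each message's offset, position, and validity so a
> > > >             // corrupted entry around the failing offset can be spotted.
> > > >             kafka.tools.DumpLogSegments.main(new String[] {
> > > >                 "--files",
> > > >                 "/tmp/kafka-logs/nf_errors_log-0/00000000000000000000.log"
> > > >             });
> > > >         }
> > > >     }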
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Sun, Jan 19, 2014 at 10:09 PM, Bae, Jae Hyeon <metac...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hello
> > > > >
> > > > > I finally upgraded Kafka 0.7 to Kafka 0.8, and a few Kafka 0.8 clusters
> > > > > are being tested now.
> > > > >
> > > > > Today, I got alerted with the following message:
> > > > >
> > > > >  "data": {
> > > > >     "exceptionMessage": "Found a message larger than the maximum
> > fetch
> > > > size
> > > > > of this consumer on topic nf_errors_log partition 0 at fetch offset
> > > > > 76736251. Increase the fetch size, or decrease the maximum message
> > size
> > > > the
> > > > > broker will allow.",
> > > > >     "exceptionStackTrace":
> > "kafka.common.MessageSizeTooLargeException:
> > > > > Found a message larger than the maximum fetch size of this consumer
> > on
> > > > > topic nf_errors_log partition 0 at fetch offset 76736251. Increase
> > the
> > > > > fetch size, or decrease the maximum message size the broker will
> > allow.
> > > > >     "exceptionType": "kafka.common.MessageSizeTooLargeException"
> > > > >   },
> > > > >   "description": "RuntimeException aborted realtime
> > > > > processing[nf_errors_log]"
> > > > >
> > > > > What I don't understand is that I am using all default properties, which
> > > > > means:
> > > > >
> > > > > the broker's message.max.bytes is 1000000
> > > > > the consumer's fetch.message.max.bytes is 1024 * 1024, which is greater
> > > > > than the broker's message.max.bytes
> > > > >
> > > > > How could this happen? I am using snappy compression.
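> > > > >
> > > > > For reference, this is a minimal sketch of how the high-level consumer
> > > > > is configured (the ZooKeeper address and group id are placeholders), and
> > > > > the property the exception suggests raising:
> > > > >
> > > > >     import java.util.Properties;
> > > > >     import kafka.consumer.Consumer;
> > > > >     import kafka.consumer.ConsumerConfig;
> > > > >     import kafka.javaapi.consumer.ConsumerConnector;
> > > > >
> > > > >     public class ConsumerFetchSizeExample {
> > > > >         public static void main(String[] args) {
> > > > >             Properties props = new Properties();
> > > > >             props.put("zookeeper.connect", "zk-host:2181"); // placeholder
> > > > >             props.put("group.id", "nf-errors-consumer");    // placeholder
> > > > >             // Default is 1024 * 1024 (1048576); the exception advises
> > > > >             // increasing it if messages can be larger than that.
> > > > >             props.put("fetch.message.max.bytes",
> > > > >                       Integer.toString(2 * 1024 * 1024));
> > > > >             ConsumerConnector connector =
> > > > >                 Consumer.createJavaConsumerConnector(
> > > > >                     new ConsumerConfig(props));
> > > > >             connector.shutdown();
> > > > >         }
> > > > >     }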
> > > > >
> > > > > Thank you
> > > > > Best, Jae
> > > > >
> > > >
> > >
> >
>
