Does it work if you use the console consumer from 0.10.1 and/or 0.10.0?

Ismael
On Wed, Dec 21, 2016 at 5:06 AM, Ofir Sharony <ofir.shar...@myheritage.com> wrote:
> Hi guys,
>
> I'm trying to consume our Kafka topics using Kafkacat
> <https://github.com/edenhill/kafkacat>.
> We were able to successfully consume messages using the default
> compression.type (producer, i.e. without compression in our case).
> When we changed the compression type to any compression algorithm (gzip /
> snappy / lz4), consumption fails immediately with a fatal error.
>
> Here's the command:
>
> kafkacat -X api.version.request=true -b localhost -t <topic> -o end -C -c 1
> -v -p 4
>
> And the response received when consuming a topic configured with any of the
> compression algorithms:
>
> % Fatal error at consume_cb:407:
> % ERROR: Topic maxwell_with_request_site_sharded [4] error: Message at
> offset 733099 is too large to fetch, try increasing
> receive.message.max.bytes
>
> The response is the same for all offsets, across different partitions and all
> our topics.
> Please note that the messages are quite small (smaller than the value of
> receive.message.max.bytes).
> I'm using the latest Kafka version (0.10.1).
>
> Please advise.
> Thanks,
>
> *Ofir Sharony*
> BackEnd Tech Lead
>
> Mobile: +972-54-7560277 | ofir.shar...@myheritage.com
> | www.myheritage.com
> MyHeritage Ltd., 3 Ariel Sharon St., Or Yehuda 60250, Israel
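[Editor's note: since the error message suggests raising receive.message.max.bytes, one thing worth ruling out is the fetch limits themselves. Both fetch.message.max.bytes and receive.message.max.bytes are real librdkafka properties settable via kafkacat's -X flag; the byte values below are illustrative only, not a recommendation, and receive.message.max.bytes should be kept larger than fetch.message.max.bytes since it bounds the whole fetch response, not a single message.]

```shell
# Sketch: re-run the original command with explicitly raised fetch limits.
# Values are illustrative; <topic> is a placeholder as in the original post.
kafkacat -X api.version.request=true \
         -X fetch.message.max.bytes=10000000 \
         -X receive.message.max.bytes=10001000 \
         -b localhost -t <topic> -p 4 -o end -C -c 1 -v
```

If this succeeds, the issue is likely that compressed message sets (which are fetched as a single wrapper message) exceed the default fetch size even when individual messages are small.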