in the consumer.properties file, I've got (default?):

zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=1000000
group.id=test-consumer-group
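
If fetch.message.max.bytes is really a consumer-side setting (that's my reading
of the 0.8 configuration page, but I could be wrong), I'm guessing I'd also need
a line like this in consumer.properties:

fetch.message.max.bytes=10485760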

thanks,

-Louis


On Thu, Jun 26, 2014 at 6:04 PM, Guozhang Wang <wangg...@gmail.com> wrote:

> Hi Louis,
>
> What are your consumer's config properties?
>
> Guozhang
>
>
> On Thu, Jun 26, 2014 at 5:54 PM, Louis Clark <sfgypsy...@gmail.com> wrote:
>
>> Hi, I'm trying to stream large messages with Kafka into Spark.  Generally
>> this has been working nicely, but I found one message (5.1MB in size)
>> which is clogging up my pipeline.  I have these settings in server.properties:
>> fetch.message.max.bytes=10485760
>> replica.fetch.max.bytes=10485760
>> message.max.bytes=10485760
>> fetch.size=10485760
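>>
>> If I'm reading the configuration page right (and I may well not be), those
>> settings actually split between the broker and the consumer, with fetch.size
>> looking like an older name for the consumer-side limit:
>>
>> # server.properties (broker)
>> message.max.bytes=10485760
>> replica.fetch.max.bytes=10485760
>> # consumer configuration
>> fetch.message.max.bytes=10485760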
>>
>> I'm not getting any obvious errors in the logs and I can retrieve the
>> large message with this command:
>> kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning
>> --topic mytopic --fetch-size=10485760
>>
>> After digging into this problem, I recently noticed that the kafkaServer.out
>> log is complaining that the fetch.message.max.bytes parameter is not valid:
>> [2014-06-25 11:33:36,547] WARN Property fetch.message.max.bytes is not
>> valid (kafka.utils.VerifiableProperties)
>> [2014-06-25 11:33:36,547] WARN Property fetch.size is not valid
>> (kafka.utils.VerifiableProperties)
>> That seems like the most critical parameter for my needs.  The broker
>> apparently does not recognize it as a valid parameter, despite it being
>> listed on the configuration website
>> (https://kafka.apache.org/08/configuration.html).  I'm using 0.8.1.1.
>> Any ideas?
>>
>> many thanks for reading this!
>>
>
>
>
> --
> -- Guozhang
>
