Pass your producer properties file to bin/kafka-console-producer.sh with
--producer.config your_producer.properties.
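A sketch of the full command (the path to your_producer.properties is an
assumption; point it at wherever your file actually lives):

bin/kafka-console-producer.sh --broker-list localhost:9092 \
  --producer.config /path/to/your_producer.properties \
  --topic metadata_upload < /export/home/Userid/metadata/TopUBP-1_metadata.json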
If you don't add --producer.config, the console producer still uses the
default value of max.request.size (1048576 bytes, i.e. 1 MB), which is why
your ~2 MB message is rejected.
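The file you pass should carry the producer-side overrides you already
listed in your message, e.g.:

# your_producer.properties (values taken from the Producer.properties below)
max.request.size=100000000
compression.type=gzip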

sarath reddy <putthasar...@gmail.com> wrote on Fri, Sep 21, 2018 at 12:27 AM:

> ++pushkar
>
> On Thu 20 Sep, 2018, 16:20 sarath reddy, <putthasar...@gmail.com> wrote:
>
> > Hi Team,
> >
> > We are trying to configure Kafka to produce larger messages,
> >
> > Below are the configs:-
> >
> > Server.properties
> >
> > message.max.bytes=100000000
> > replica.fetch.max.bytes=100000001
> >
> > Producer.properties
> >
> > max.request.size=100000000
> > compression.type=gzip
> >
> > Consumer.properties
> >
> > fetch.message.max.bytes=100000000
> >
> > When trying to send a larger file with the below command:
> > bin/kafka-console-producer.sh --broker-list localhost:9092 --topic
> > metadata_upload < /export/home/Userid/metadata/TopUBP-1_metadata.json
> >
> > Getting below error:-
> >
> > >[2018-09-19 11:20:03,307] ERROR Error when sending message to topic
> > metadata_upload with key: null, value: 2170060 bytes with error:
> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> > org.apache.kafka.common.errors.RecordTooLargeException: The message is
> > 2170148 bytes when serialized which is larger than the maximum request
> > size you have configured with the max.request.size configuration.
> >
> > Thanks,
> > Sarath Reddy.
