Hi Adrien,

I set fetch.max.wait.ms to 1500 ms and ran it again. It still doesn't exceed 
1,150 records per fetch. On the producer side (using kafka-producer-perf-test), 
it's producing about 30,000 records/sec and a million records in total.
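As a quick sanity check on those producer-side numbers (plain Python; the ~1 KB record size comes from the original message below, everything else is arithmetic on the figures above):

```python
# How long should the broker need to accumulate the configured
# fetch.min.bytes at the reported producer rate?
# Assumes ~1 KB (1024-byte) records, as described in the original message.

records_per_sec = 30_000
record_size = 1024              # bytes per record (approximate)
fetch_min_bytes = 5_120_000     # fetch.min.bytes as configured

bytes_per_sec = records_per_sec * record_size       # ~30 MB/s incoming
seconds_to_fill = fetch_min_bytes / bytes_per_sec
print(round(seconds_to_fill * 1000))  # -> 167 (ms), well under the 1500 ms wait
```

So at this rate the broker should accumulate the requested minimum in well under the configured fetch.max.wait.ms, which suggests the wait time is not what is capping the fetch.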

I tried different combinations of these three parameters 
(max.poll.records, fetch.min.bytes, fetch.max.wait.ms). All other 
parameters are unchanged and have their default values. I have not changed any 
of the broker configs either. I read the Kafka documentation but could not find 
any other parameters that could affect the fetch size.

Is there some other consumer or broker parameter that I might be missing?
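For what it's worth, a back-of-the-envelope check on the numbers in this thread (plain Python; the ~1 KB record size and 1,150-record ceiling are from the messages above, and the 1 MiB figure is the documented default of the consumer's max.partition.fetch.bytes setting, a parameter not yet mentioned in this thread):

```python
# Rough arithmetic on the numbers reported in this thread.
# Assumption: each record is ~1 KiB (1024 bytes), as described below.

record_size = 1024          # bytes per record (approximate)
observed_records = 1150     # ceiling seen per fetch
requested_min = 5_120_000   # fetch.min.bytes as configured

observed_bytes = observed_records * record_size
print(observed_bytes)       # -> 1177600, i.e. just over 1 MiB

# The observed fetch (~1.18 MB) is far below the configured fetch.min.bytes,
# but close to 1 MiB (1,048,576 bytes), the default of the consumer's
# max.partition.fetch.bytes setting, which caps the data returned per
# partition. It may be worth checking alongside the three parameters above.
print(observed_bytes > 1_048_576)  # -> True, barely above the 1 MiB default
```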

Thanks,
Vishnu

On 7/18/18, 1:48 PM, "adrien ruffie" <adriennolar...@hotmail.fr> wrote:

    Hi Vishnu,
    
Have you checked your fetch.max.wait.ms value?
    
    The wait may not be long enough to accumulate your 5000 records; 
    perhaps it is just long enough to accumulate about 1150.
    
    
    fetch.max.wait.ms
    
    By setting fetch.min.bytes, you tell Kafka to wait until it has enough data 
to send before responding to the consumer. fetch.max.wait.ms lets you control 
how long to wait. By default, Kafka will wait up to 500 ms. This results in up 
to 500 ms of extra latency in case there is not enough data flowing to the 
Kafka topic to satisfy the minimum amount of data to return. If you want to 
limit the potential latency (usually due to SLAs controlling the maximum 
latency of the application), you can set fetch.max.wait.ms to a lower value. If 
you set fetch.max.wait.ms to 100 ms and fetch.min.bytes to 1 MB, Kafka will 
receive a fetch request from the consumer and will respond with data either 
when it has 1 MB of data to return or after 100 ms, whichever happens first.
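The "whichever happens first" rule above can be sketched numerically (plain Python; the 1 MB and 100 ms figures come from the passage above, while the arrival rates are hypothetical stand-ins, not measured values):

```python
# Sketch of the broker's fetch-response decision described above:
# respond once fetch.min.bytes has accumulated OR fetch.max.wait.ms elapses,
# whichever comes first. The arrival rates below are hypothetical examples.

def time_to_respond(fetch_min_bytes, fetch_max_wait_ms, bytes_per_ms):
    """Milliseconds until the broker answers the fetch request."""
    ms_to_fill = fetch_min_bytes / bytes_per_ms  # time to accumulate the minimum
    return min(ms_to_fill, fetch_max_wait_ms)

# Values from the passage: fetch.min.bytes = 1 MB, fetch.max.wait.ms = 100 ms.
# At 5 KB/ms the minimum would take ~205 ms, so the 100 ms timeout fires first.
print(time_to_respond(1_048_576, 100, 5_120))          # -> 100
# At 20 KB/ms the minimum fills in ~51 ms, before the timeout.
print(round(time_to_respond(1_048_576, 100, 20_480)))  # -> 51
```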
    
    Best regards,
    
    
    Adrien
    
    ________________________________
From: Vishnu Manivannan <vis...@arrcus.com>
    Sent: Wednesday, July 18, 2018 9:00:50 PM
    To: users@kafka.apache.org
    Subject: Kafka Connect: Increase Consumer Consumption
    
    Hi,
    
    I am currently working with a single Kafka broker and a single Kafka 
consumer. I am trying to get the consumer to fetch more records, so I can 
increase the batch size when I write the data to a DB.
    
    Each record is about 1 KB and I am trying to fetch at least 5000 records 
each time. So, I changed the configurations for the following consumer 
parameters:
    
      *   max.poll.records = 5000
      *   fetch.min.bytes = 5120000
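The sizing behind those two settings can be made explicit (a sketch only; the property names are real Kafka consumer settings, and the dict is a hypothetical stand-in for however the configuration is actually supplied):

```python
# Sketch of the consumer overrides listed above, with the sizing arithmetic
# spelled out. Dict form only; in practice these would go into the consumer
# configuration (e.g. a properties file or client constructor).

RECORD_SIZE = 1024       # ~1 KB per record, as described above
TARGET_RECORDS = 5000    # records wanted per fetch

consumer_overrides = {
    "max.poll.records": TARGET_RECORDS,
    # 5000 records x ~1 KB each = 5,120,000 bytes
    "fetch.min.bytes": TARGET_RECORDS * RECORD_SIZE,
}

print(consumer_overrides["fetch.min.bytes"])  # -> 5120000
```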
    
    For some reason, the maximum number of records fetched each time does not 
go above 1150. Are there any other parameters that I should look into or any 
changes I should make to the current configurations?
    
    Thanks,
    Vishnu
    
    
    
