Hi,
We have our Kafka brokers in AWS backed by st1 EBS volumes, which are optimised for throughput.

On a warm restart (broker restart on the same instance), if the relevant data is already in the page cache, everything goes well and the broker boots in just a few seconds (~1000 partitions). On a cold start (a new EC2 instance that has just had the volume attached), nothing is in the page cache and the first boot can take up to 10 minutes.

Question: what does the Kafka broker actually read on start-up? I would assume the index files and so on, but not the actual segment data, does it?

If so, would it be worth considering a KIP that allows Kafka's files to be laid out like this:
- Kafka index files, or any small files, would be placed on a faster drive (say a gp2 SSD volume).
- Kafka segment files, or any large files, would be placed on a throughput-optimised drive.

I am not sure how the KIP, or the code change, would shape up, but:
- Is that a viable strategy to drastically improve broker start-up time?
- Would it also improve shutdown time?

Thanks,
Stephane
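
P.S. To get a feel for the sizes involved, here is a rough sketch (not Kafka code, just a hypothetical helper; the directory path is an assumption, point it at one of your own log.dirs entries) that walks a log directory and compares how many bytes sit in index files versus segment files, i.e. roughly how much data would have to live on the fast volume versus the throughput-optimised one:

    # Rough sketch: estimate how much of a Kafka log directory is index data
    # versus segment data, to gauge how much would fit on a small fast volume.
    # LOG_DIR is a hypothetical path; replace it with your own log.dirs entry.
    import os

    LOG_DIR = "/var/lib/kafka/data"

    INDEX_SUFFIXES = (".index", ".timeindex", ".txnindex")
    SEGMENT_SUFFIX = ".log"

    index_bytes = 0
    segment_bytes = 0

    for root, _dirs, files in os.walk(LOG_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file may be deleted/rotated while we walk
            if name.endswith(INDEX_SUFFIXES):
                index_bytes += size
            elif name.endswith(SEGMENT_SUFFIX):
                segment_bytes += size

    print(f"index files:   {index_bytes / 1024**2:.1f} MiB")
    print(f"segment files: {segment_bytes / 1024**3:.1f} GiB")

If the index data turns out to be only a small fraction of the total, a small gp2 volume would presumably be enough to hold it.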