Yes. That could happen.
Kafka provides at-least-once processing semantics if you commit offsets
after processing.
You can avoid duplicates if you commit offsets before processing, but
this might result in data loss.
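To see why commit-after-processing yields duplicates, here is a minimal simulation (not a real Kafka client; the broker, offsets, and crash point are all modeled in plain Python): a consumer processes a message, crashes before committing its offset, and on restart re-reads from the last committed offset.

```python
# Simulates at-least-once delivery: offsets are committed only AFTER
# processing, so a crash between processing and committing causes the
# same message to be processed again on restart.
messages = ["m0", "m1", "m2", "m3"]
committed_offset = 0
processed = []

def run_consumer(crash_before_commit_at=None):
    """Consume from the last committed offset; optionally crash after
    processing a message but before committing its offset."""
    global committed_offset
    offset = committed_offset
    while offset < len(messages):
        processed.append(messages[offset])      # process first...
        if offset == crash_before_commit_at:
            return                              # crash: offset never committed
        committed_offset = offset + 1           # ...then commit
        offset += 1

run_consumer(crash_before_commit_at=1)   # crashes after processing m1
run_consumer()                           # restart: m1 is re-delivered
print(processed)  # ['m0', 'm1', 'm1', 'm2', 'm3'] -> m1 is a duplicate
```

Committing *before* processing inverts the failure mode: a crash after the commit but before processing means the message is skipped entirely, which is the data-loss case mentioned above.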
Getting exactly-once is quite hard, and you will need to build your own
de-duplication.
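One common de-dup approach is to track already-processed message keys and skip redeliveries. A minimal sketch (a plain in-memory set stands in for what would normally be an external store such as a database keyed by message ID; the key/value pairs are illustrative):

```python
# De-dup sketch: remember which message keys have been processed and
# skip duplicates, turning at-least-once into effectively-once.
seen = set()
output = []

def process_once(key, value):
    """Process a message only if its key has not been seen before."""
    if key in seen:          # duplicate delivery -> skip
        return False
    seen.add(key)
    output.append(value)     # the actual "processing" step
    return True

# Redelivered stream: key 1 arrives twice after a consumer restart.
for key, value in [(0, "a"), (1, "b"), (1, "b"), (2, "c")]:
    process_once(key, value)

print(output)  # ['a', 'b', 'c'] -> the duplicate of key 1 was skipped
```

In production the seen-set and the processing result must be updated atomically (e.g. in the same database transaction), otherwise the de-dup store itself can get out of sync with the output on a crash.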
Is it possible to receive duplicate messages from Kafka 0.9.0.1 or 0.10.1.0
when you have a topic with three partitions and one consumer group with three
consumer clients? One client stops consuming and is taken offline. These
clients do not commit offsets immediately, but the offsets are committed later.