Hi there!

I'm a new grad engineer and I'm pretty new to the Kafka world.

I'm trying to replace RabbitMQ with Apache Kafka, and while planning I bumped
into several conceptual problems.

First, we use RabbitMQ with a queue-per-user policy, meaning each user has
their own queue. This suits our needs because each user represents some job to
be done for that particular user, and if one user causes a problem, the other
users' queues are never affected because they are separated. (By "problem" I
mean this: messages in a queue are dispatched to the user via HTTP request. If
the user refuses to receive a message (server down, perhaps?), the message goes
back into a retry queue, so no message is lost unless the queue itself goes
down.)

Now, Kafka is fault tolerant and failure safe because it writes to disk, and
that is exactly why I am trying to bring Kafka into our architecture.

But there are problems with my plans.

First, I was thinking of creating one topic per user, so each user would have
their own topic. What problems will this cause? My maximum estimate is that I
would have around 1~5 million topics.

Second, if I instead create topics based on operation and partition by a hash
of the user's ID, and one user is currently unable to consume messages, will
all the other users in that partition have to wait? What would be the best way
to structure this situation?

So, in conclusion: 1~5 million users, and we do not want one user to block a
large number of other users from being processed. A topic per user would solve
this, but it seems there might be an issue with ZooKeeper if such a large
number of topics is created (is this true?).
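To illustrate what I mean by blocking, here is a rough sketch of the consumer
side I have in mind, where a failed HTTP dispatch pauses only that partition
while the others keep flowing (the topic name, group id, and the
dispatchOverHttp helper are all placeholders I invented):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PausingDispatcher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "user-job-dispatcher");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("user-jobs")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    Set<TopicPartition> failed = new HashSet<>();
                    for (ConsumerRecord<String, String> record : records) {
                        TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                        if (failed.contains(tp)) {
                            continue; // skip the rest of this partition's batch; it is re-read later
                        }
                        if (!dispatchOverHttp(record.key(), record.value())) {
                            // Stop fetching from this partition only; all other
                            // partitions keep being consumed.
                            consumer.pause(Collections.singleton(tp));
                            consumer.seek(tp, record.offset()); // re-read from the failed record later
                            failed.add(tp);
                            // Elsewhere (e.g. on a timer) we would call consumer.resume(...) to retry.
                        }
                    }
                }
            }
        }

        // Hypothetical HTTP dispatch: returns false if the user's server refuses the message.
        static boolean dispatchOverHttp(String userId, String payload) {
            return true;
        }
    }

But as far as I can tell this still blocks every other user whose key hashes to
the paused partition, which is exactly my worry.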

What would be the best solution for structuring this, considering scalability?
